Test Report: QEMU_macOS 20091

6f6ff76044c36bcb4277257fa9dc7e7f34dfce32:2024-12-16:37513

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 28.96
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.25
48 TestCertOptions 12.22
49 TestCertExpiration 197.98
50 TestDockerFlags 12.83
51 TestForceSystemdFlag 12.13
52 TestForceSystemdEnv 10.22
97 TestFunctional/parallel/ServiceCmdConnect 39.11
162 TestMultiControlPlane/serial/StartCluster 250.51
163 TestMultiControlPlane/serial/DeployApp 74.79
164 TestMultiControlPlane/serial/PingHostFromPods 0.1
165 TestMultiControlPlane/serial/AddWorkerNode 0.09
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
169 TestMultiControlPlane/serial/StopSecondaryNode 0.13
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
171 TestMultiControlPlane/serial/RestartSecondaryNode 0.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 1473.93
184 TestJSONOutput/start/Command 250.36
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.05
196 TestJSONOutput/unpause/Command 0.06
202 TestJSONOutput/stop/Command 184.07
216 TestMountStart/serial/StartWithMountFirst 10.17
219 TestMultiNode/serial/FreshStart2Nodes 10.05
220 TestMultiNode/serial/DeployApp2Nodes 110.68
221 TestMultiNode/serial/PingHostFrom2Pods 0.1
222 TestMultiNode/serial/AddNode 0.08
223 TestMultiNode/serial/MultiNodeLabels 0.07
224 TestMultiNode/serial/ProfileList 0.09
225 TestMultiNode/serial/CopyFile 0.07
226 TestMultiNode/serial/StopNode 0.16
227 TestMultiNode/serial/StartAfterStop 49.69
228 TestMultiNode/serial/RestartKeepsNodes 8.81
229 TestMultiNode/serial/DeleteNode 0.11
230 TestMultiNode/serial/StopMultiNode 3.76
231 TestMultiNode/serial/RestartMultiNode 5.27
232 TestMultiNode/serial/ValidateNameConflict 20.9
236 TestPreload 10
238 TestScheduledStopUnix 10.24
239 TestSkaffold 12.66
242 TestRunningBinaryUpgrade 605.48
244 TestKubernetesUpgrade 18.45
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.05
260 TestStoppedBinaryUpgrade/Upgrade 573.04
262 TestPause/serial/Start 9.99
272 TestNoKubernetes/serial/StartWithK8s 9.84
273 TestNoKubernetes/serial/StartWithStopK8s 5.32
274 TestNoKubernetes/serial/Start 5.31
278 TestNoKubernetes/serial/StartNoArgs 5.34
280 TestNetworkPlugins/group/auto/Start 10.06
281 TestNetworkPlugins/group/kindnet/Start 9.86
282 TestNetworkPlugins/group/calico/Start 9.91
283 TestNetworkPlugins/group/custom-flannel/Start 9.85
284 TestNetworkPlugins/group/false/Start 9.78
285 TestNetworkPlugins/group/enable-default-cni/Start 9.88
286 TestNetworkPlugins/group/flannel/Start 9.97
287 TestNetworkPlugins/group/bridge/Start 9.96
289 TestNetworkPlugins/group/kubenet/Start 9.99
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.03
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/old-k8s-version/serial/Pause 0.12
302 TestStartStop/group/no-preload/serial/FirstStart 10.03
304 TestStartStop/group/embed-certs/serial/FirstStart 11.45
305 TestStartStop/group/no-preload/serial/DeployApp 0.12
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.23
309 TestStartStop/group/no-preload/serial/SecondStart 5.31
310 TestStartStop/group/embed-certs/serial/DeployApp 0.1
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
314 TestStartStop/group/embed-certs/serial/SecondStart 6.16
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
318 TestStartStop/group/no-preload/serial/Pause 0.11
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.03
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/embed-certs/serial/Pause 0.12
326 TestStartStop/group/newest-cni/serial/FirstStart 10.42
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.39
336 TestStartStop/group/newest-cni/serial/SecondStart 5.26
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/newest-cni/serial/Pause 0.12

TestDownloadOnly/v1.20.0/json-events (28.96s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-651000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-651000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (28.962064166s)

-- stdout --
	{"specversion":"1.0","id":"f0bd63a1-1501-47e9-a1f4-c99fcb6811df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-651000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e999ba00-e623-42ea-b3cd-ea11f9e77594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20091"}}
	{"specversion":"1.0","id":"c42d3e33-3262-4ea5-8f9f-7c502c7ad281","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig"}}
	{"specversion":"1.0","id":"0a724ab2-ff3c-4288-a11c-bbbbbb9530f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"42a3be12-95ba-4425-823c-50b1ca6a1ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6cceca7d-dc6c-4345-bc8e-63b711a7d136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube"}}
	{"specversion":"1.0","id":"104d2b3e-8026-4362-8cc7-d937e5bdc84b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"287c8725-797c-4118-980e-517a17e4625b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1edc6575-ae31-401c-86f6-092244f8ea9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b8143a80-dbe5-4c6b-95e3-5e5aa09c5bc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b021e228-dc8f-4de9-8d58-2de3a48ed8e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-651000\" primary control-plane node in \"download-only-651000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff695516-b85b-405c-b3ec-c2c7864fdb36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"be3e6e24-45f9-48da-9311-7378acde81b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109410600 0x109410600 0x109410600 0x109410600 0x109410600 0x109410600 0x109410600] Decompressors:map[bz2:0x14000737d00 gz:0x14000737d08 tar:0x14000737cb0 tar.bz2:0x14000737cc0 tar.gz:0x14000737cd0 tar.xz:0x14000737ce0 tar.zst:0x14000737cf0 tbz2:0x14000737cc0 tgz:0x140
00737cd0 txz:0x14000737ce0 tzst:0x14000737cf0 xz:0x14000737d10 zip:0x14000737d20 zst:0x14000737d18] Getters:map[file:0x14000a666d0 http:0x140008880a0 https:0x140008880f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ced043d2-19d5-44c1-92bd-372c9657edf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1216 11:34:30.123336    1495 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:34:30.123499    1495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:30.123503    1495 out.go:358] Setting ErrFile to fd 2...
	I1216 11:34:30.123505    1495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:30.123637    1495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	W1216 11:34:30.123703    1495 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20091-990/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20091-990/.minikube/config/config.json: no such file or directory
	I1216 11:34:30.125091    1495 out.go:352] Setting JSON to true
	I1216 11:34:30.144093    1495 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":241,"bootTime":1734377429,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:34:30.144162    1495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:34:30.148957    1495 out.go:97] [download-only-651000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:34:30.149162    1495 notify.go:220] Checking for updates...
	W1216 11:34:30.149203    1495 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 11:34:30.153033    1495 out.go:169] MINIKUBE_LOCATION=20091
	I1216 11:34:30.160084    1495 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:34:30.164965    1495 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:34:30.169014    1495 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:34:30.171919    1495 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	W1216 11:34:30.177984    1495 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 11:34:30.178225    1495 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:34:30.180708    1495 out.go:97] Using the qemu2 driver based on user configuration
	I1216 11:34:30.180728    1495 start.go:297] selected driver: qemu2
	I1216 11:34:30.180751    1495 start.go:901] validating driver "qemu2" against <nil>
	I1216 11:34:30.180829    1495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:34:30.183970    1495 out.go:169] Automatically selected the socket_vmnet network
	I1216 11:34:30.189870    1495 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1216 11:34:30.189970    1495 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 11:34:30.189999    1495 cni.go:84] Creating CNI manager for ""
	I1216 11:34:30.190042    1495 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 11:34:30.190105    1495 start.go:340] cluster config:
	{Name:download-only-651000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-651000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:34:30.194631    1495 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:34:30.198991    1495 out.go:97] Downloading VM boot image ...
	I1216 11:34:30.199015    1495 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso
	I1216 11:34:41.876049    1495 out.go:97] Starting "download-only-651000" primary control-plane node in "download-only-651000" cluster
	I1216 11:34:41.876070    1495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 11:34:41.933921    1495 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 11:34:41.933944    1495 cache.go:56] Caching tarball of preloaded images
	I1216 11:34:41.934118    1495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 11:34:41.939216    1495 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 11:34:41.939222    1495 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:42.021226    1495 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 11:34:57.707195    1495 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:57.707394    1495 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:58.401718    1495 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 11:34:58.401916    1495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/download-only-651000/config.json ...
	I1216 11:34:58.401932    1495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/download-only-651000/config.json: {Name:mkc47c5693f7ee3d018304764c489840134397e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:34:58.402219    1495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 11:34:58.402469    1495 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1216 11:34:59.004361    1495 out.go:193] 
	W1216 11:34:59.010242    1495 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109410600 0x109410600 0x109410600 0x109410600 0x109410600 0x109410600 0x109410600] Decompressors:map[bz2:0x14000737d00 gz:0x14000737d08 tar:0x14000737cb0 tar.bz2:0x14000737cc0 tar.gz:0x14000737cd0 tar.xz:0x14000737ce0 tar.zst:0x14000737cf0 tbz2:0x14000737cc0 tgz:0x14000737cd0 txz:0x14000737ce0 tzst:0x14000737cf0 xz:0x14000737d10 zip:0x14000737d20 zst:0x14000737d18] Getters:map[file:0x14000a666d0 http:0x140008880a0 https:0x140008880f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1216 11:34:59.010266    1495 out_reason.go:110] 
	W1216 11:34:59.018180    1495 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:34:59.021101    1495 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-651000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (28.96s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
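
Both TestDownloadOnly failures above share one root cause: dl.k8s.io returns 404 for the v1.20.0 darwin/arm64 kubectl checksum because no such binary was ever published (v1.20.0 shipped before Go gained darwin/arm64 support), so the cache step exits with status 40 and the file the kubectl subtest stats never exists. Below is a minimal sketch to confirm which releases publish the binary, reusing the URL scheme from the log; v1.31.0 is just an arbitrary recent release assumed to ship darwin/arm64 builds.

// headcheck.go: probe dl.k8s.io for the kubectl checksum file the test
// tried to fetch; a 404 for v1.20.0 reproduces the failure above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	for _, v := range []string{"v1.20.0", "v1.31.0"} {
		url := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/darwin/arm64/kubectl.sha256", v)
		resp, err := http.Head(url) // HEAD is enough; only the status code matters
		if err != nil {
			fmt.Printf("%s: %v\n", v, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", v, resp.Status) // expect 404 Not Found for v1.20.0
	}
}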

TestOffline (10.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-850000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-850000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.087562458s)

-- stdout --
	* [offline-docker-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-850000" primary control-plane node in "offline-docker-850000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:29:34.517288    5609 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:29:34.517477    5609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:34.517479    5609 out.go:358] Setting ErrFile to fd 2...
	I1216 12:29:34.517482    5609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:34.517627    5609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:29:34.518874    5609 out.go:352] Setting JSON to false
	I1216 12:29:34.537864    5609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3545,"bootTime":1734377429,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:29:34.537943    5609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:29:34.542887    5609 out.go:177] * [offline-docker-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:29:34.549807    5609 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:29:34.549865    5609 notify.go:220] Checking for updates...
	I1216 12:29:34.558767    5609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:29:34.561820    5609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:29:34.564736    5609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:29:34.567808    5609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:29:34.570856    5609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:29:34.574120    5609 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:34.574177    5609 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:29:34.577744    5609 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:29:34.584770    5609 start.go:297] selected driver: qemu2
	I1216 12:29:34.584782    5609 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:29:34.584796    5609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:29:34.587087    5609 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:29:34.589752    5609 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:29:34.593849    5609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:29:34.593872    5609 cni.go:84] Creating CNI manager for ""
	I1216 12:29:34.593892    5609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:29:34.593900    5609 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:29:34.593935    5609 start.go:340] cluster config:
	{Name:offline-docker-850000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:offline-docker-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:29:34.598970    5609 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:34.602739    5609 out.go:177] * Starting "offline-docker-850000" primary control-plane node in "offline-docker-850000" cluster
	I1216 12:29:34.610779    5609 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:29:34.610811    5609 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:29:34.610824    5609 cache.go:56] Caching tarball of preloaded images
	I1216 12:29:34.610921    5609 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:29:34.610927    5609 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:29:34.610998    5609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/offline-docker-850000/config.json ...
	I1216 12:29:34.611007    5609 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/offline-docker-850000/config.json: {Name:mk0017fb62ca786257f8f270ca0f1af3c2d432f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:29:34.611485    5609 start.go:360] acquireMachinesLock for offline-docker-850000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:34.611531    5609 start.go:364] duration metric: took 40.083µs to acquireMachinesLock for "offline-docker-850000"
	I1216 12:29:34.611545    5609 start.go:93] Provisioning new machine with config: &{Name:offline-docker-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:offline-docker-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:34.611575    5609 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:34.615779    5609 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:29:34.631380    5609 start.go:159] libmachine.API.Create for "offline-docker-850000" (driver="qemu2")
	I1216 12:29:34.631421    5609 client.go:168] LocalClient.Create starting
	I1216 12:29:34.631497    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:34.631537    5609 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:34.631553    5609 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:34.631599    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:34.631630    5609 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:34.631645    5609 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:34.632132    5609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:34.792642    5609 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:34.963170    5609 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:34.963179    5609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:34.963423    5609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2
	I1216 12:29:34.973984    5609 main.go:141] libmachine: STDOUT: 
	I1216 12:29:34.974014    5609 main.go:141] libmachine: STDERR: 
	I1216 12:29:34.974099    5609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2 +20000M
	I1216 12:29:34.984787    5609 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:34.984809    5609 main.go:141] libmachine: STDERR: 
	I1216 12:29:34.984834    5609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2
	I1216 12:29:34.984839    5609 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:34.984855    5609 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:34.984888    5609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b3:84:e9:f2:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2
	I1216 12:29:34.986918    5609 main.go:141] libmachine: STDOUT: 
	I1216 12:29:34.986938    5609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:34.986958    5609 client.go:171] duration metric: took 355.528417ms to LocalClient.Create
	I1216 12:29:36.987194    5609 start.go:128] duration metric: took 2.375593625s to createHost
	I1216 12:29:36.987212    5609 start.go:83] releasing machines lock for "offline-docker-850000", held for 2.375657042s
	W1216 12:29:36.987225    5609 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:36.995952    5609 out.go:177] * Deleting "offline-docker-850000" in qemu2 ...
	W1216 12:29:37.009153    5609 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:37.009160    5609 start.go:729] Will try again in 5 seconds ...
	I1216 12:29:42.011287    5609 start.go:360] acquireMachinesLock for offline-docker-850000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:42.011409    5609 start.go:364] duration metric: took 97.625µs to acquireMachinesLock for "offline-docker-850000"
	I1216 12:29:42.011454    5609 start.go:93] Provisioning new machine with config: &{Name:offline-docker-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:offline-docker-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:42.011531    5609 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:42.020039    5609 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:29:42.034931    5609 start.go:159] libmachine.API.Create for "offline-docker-850000" (driver="qemu2")
	I1216 12:29:42.034960    5609 client.go:168] LocalClient.Create starting
	I1216 12:29:42.035029    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:42.035069    5609 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:42.035078    5609 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:42.035119    5609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:42.035147    5609 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:42.035154    5609 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:42.035469    5609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:42.311395    5609 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:42.486674    5609 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:42.486686    5609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:42.486925    5609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2
	I1216 12:29:42.496983    5609 main.go:141] libmachine: STDOUT: 
	I1216 12:29:42.496998    5609 main.go:141] libmachine: STDERR: 
	I1216 12:29:42.497051    5609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2 +20000M
	I1216 12:29:42.506753    5609 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:42.506793    5609 main.go:141] libmachine: STDERR: 
	I1216 12:29:42.506818    5609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2
	I1216 12:29:42.506831    5609 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:42.506841    5609 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:42.506882    5609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:d7:25:39:ab:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/offline-docker-850000/disk.qcow2
	I1216 12:29:42.509270    5609 main.go:141] libmachine: STDOUT: 
	I1216 12:29:42.509293    5609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:42.509308    5609 client.go:171] duration metric: took 474.341ms to LocalClient.Create
	I1216 12:29:44.510310    5609 start.go:128] duration metric: took 2.498698625s to createHost
	I1216 12:29:44.510379    5609 start.go:83] releasing machines lock for "offline-docker-850000", held for 2.498928833s
	W1216 12:29:44.510722    5609 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:44.532205    5609 out.go:201] 
	W1216 12:29:44.536425    5609 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:29:44.536450    5609 out.go:270] * 
	* 
	W1216 12:29:44.539112    5609 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:29:44.556403    5609 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-850000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-16 12:29:44.574353 -0800 PST m=+3314.454547751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-850000 -n offline-docker-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-850000 -n offline-docker-850000: exit status 7 (70.818333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-850000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-850000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-850000
--- FAIL: TestOffline (10.25s)
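
Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the daemon, so libmachine's QEMU launch dies with "Connection refused" before a VM ever exists, and the downstream assertions (status, SSH, log retrieval) fail as a consequence. A minimal probe, assuming the daemon should be listening on the unix socket /var/run/socket_vmnet (the path minikube passes to socket_vmnet_client in the log above):

// socketprobe.go: check whether the socket_vmnet daemon is accepting
// connections; "connection refused" here matches the failure in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}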

TestCertOptions (12.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-970000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-970000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.934220833s)

-- stdout --
	* [cert-options-970000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-970000" primary control-plane node in "cert-options-970000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-970000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-970000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-970000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-970000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-970000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (84.853666ms)

-- stdout --
	* The control-plane node cert-options-970000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-970000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-970000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-970000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-970000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-970000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.763166ms)

-- stdout --
	* The control-plane node cert-options-970000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-970000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-970000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-970000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-970000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-16 12:30:19.876716 -0800 PST m=+3349.756613584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-970000 -n cert-options-970000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-970000 -n cert-options-970000: exit status 7 (35.210208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-970000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-970000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-970000
--- FAIL: TestCertOptions (12.22s)
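
For reference, the SAN assertions above (cert_options_test.go:69) verify that the apiserver certificate carries the --apiserver-ips and --apiserver-names values passed to minikube start; the test reads the certificate over SSH with openssl. A hedged sketch of the equivalent check in Go, assuming a locally saved copy of the certificate (the filename apiserver.crt is illustrative):

-- example --
// list_sans.go: decode a PEM certificate and print its SAN entries,
// the fields the cert_options_test.go:69 assertions look for.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // illustrative local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expected: localhost, www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses) // expected: 127.0.0.1, 192.168.15.15
}
-- /example --
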

TestCertExpiration (197.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.610658333s)

-- stdout --
	* [cert-expiration-027000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-027000" primary control-plane node in "cert-expiration-027000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-027000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-027000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E1216 12:30:18.631023    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.211527541s)

-- stdout --
	* [cert-expiration-027000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-027000" primary control-plane node in "cert-expiration-027000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-027000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-027000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-027000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-027000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-027000" primary control-plane node in "cert-expiration-027000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-027000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-027000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-027000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-16 12:33:22.559268 -0800 PST m=+3532.437630042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-027000 -n cert-expiration-027000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-027000 -n cert-expiration-027000: exit status 7 (72.96875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-027000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-027000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-027000
--- FAIL: TestCertExpiration (197.98s)
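
The test above provisions a cluster with --cert-expiration=3m and then, roughly three minutes later, expects the second start (--cert-expiration=8760h) to warn about expired certificates; in this run both starts died on the socket_vmnet failure before any certificate was minted. A short sketch of the expiry condition itself, under the same assumption of a locally saved PEM certificate (apiserver.crt is illustrative):

-- example --
// check_expiry.go: report whether a PEM certificate is past its NotAfter
// date, the condition the expired-certs warning is about.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // illustrative local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if left := time.Until(cert.NotAfter); left <= 0 {
		fmt.Println("certificate expired at", cert.NotAfter)
	} else {
		fmt.Println("certificate valid for another", left.Round(time.Second))
	}
}
-- /example --
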

TestDockerFlags (12.83s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-213000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-213000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.406231208s)

-- stdout --
	* [docker-flags-213000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-213000" primary control-plane node in "docker-flags-213000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-213000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:29:54.983641    5796 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:29:54.983780    5796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:54.983784    5796 out.go:358] Setting ErrFile to fd 2...
	I1216 12:29:54.983786    5796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:54.983935    5796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:29:54.985090    5796 out.go:352] Setting JSON to false
	I1216 12:29:55.003345    5796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3565,"bootTime":1734377429,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:29:55.003428    5796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:29:55.020916    5796 out.go:177] * [docker-flags-213000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:29:55.030840    5796 notify.go:220] Checking for updates...
	I1216 12:29:55.035714    5796 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:29:55.042959    5796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:29:55.051844    5796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:29:55.059824    5796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:29:55.064016    5796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:29:55.066868    5796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:29:55.071106    5796 config.go:182] Loaded profile config "force-systemd-flag-039000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:55.071173    5796 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:55.071222    5796 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:29:55.075812    5796 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:29:55.081800    5796 start.go:297] selected driver: qemu2
	I1216 12:29:55.081807    5796 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:29:55.081824    5796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:29:55.084065    5796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:29:55.086857    5796 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:29:55.089951    5796 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1216 12:29:55.089978    5796 cni.go:84] Creating CNI manager for ""
	I1216 12:29:55.090002    5796 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:29:55.090010    5796 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:29:55.090037    5796 start.go:340] cluster config:
	{Name:docker-flags-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:docker-flags-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:29:55.094483    5796 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:55.102898    5796 out.go:177] * Starting "docker-flags-213000" primary control-plane node in "docker-flags-213000" cluster
	I1216 12:29:55.106727    5796 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:29:55.106750    5796 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:29:55.106763    5796 cache.go:56] Caching tarball of preloaded images
	I1216 12:29:55.106831    5796 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:29:55.106836    5796 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:29:55.106898    5796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/docker-flags-213000/config.json ...
	I1216 12:29:55.106908    5796 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/docker-flags-213000/config.json: {Name:mk7ae780976490f4d3b3c684fd76a1ae4ce0e3c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:29:55.107371    5796 start.go:360] acquireMachinesLock for docker-flags-213000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:57.061588    5796 start.go:364] duration metric: took 1.954155041s to acquireMachinesLock for "docker-flags-213000"
	I1216 12:29:57.061679    5796 start.go:93] Provisioning new machine with config: &{Name:docker-flags-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:docker-flags-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:57.061836    5796 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:57.070430    5796 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:29:57.118226    5796 start.go:159] libmachine.API.Create for "docker-flags-213000" (driver="qemu2")
	I1216 12:29:57.118291    5796 client.go:168] LocalClient.Create starting
	I1216 12:29:57.118446    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:57.118516    5796 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:57.118537    5796 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:57.118602    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:57.118660    5796 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:57.118673    5796 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:57.119458    5796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:57.308959    5796 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:57.495437    5796 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:57.495445    5796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:57.495703    5796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2
	I1216 12:29:57.505913    5796 main.go:141] libmachine: STDOUT: 
	I1216 12:29:57.505932    5796 main.go:141] libmachine: STDERR: 
	I1216 12:29:57.506003    5796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2 +20000M
	I1216 12:29:57.514582    5796 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:57.514606    5796 main.go:141] libmachine: STDERR: 
	I1216 12:29:57.514627    5796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2
	I1216 12:29:57.514635    5796 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:57.514644    5796 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:57.514678    5796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a1:0d:32:2e:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2
	I1216 12:29:57.516512    5796 main.go:141] libmachine: STDOUT: 
	I1216 12:29:57.516525    5796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:57.516546    5796 client.go:171] duration metric: took 398.242708ms to LocalClient.Create
	I1216 12:29:59.518767    5796 start.go:128] duration metric: took 2.456883041s to createHost
	I1216 12:29:59.518868    5796 start.go:83] releasing machines lock for "docker-flags-213000", held for 2.457227166s
	W1216 12:29:59.518924    5796 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:59.540101    5796 out.go:177] * Deleting "docker-flags-213000" in qemu2 ...
	W1216 12:29:59.576794    5796 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:59.576819    5796 start.go:729] Will try again in 5 seconds ...
	I1216 12:30:04.577102    5796 start.go:360] acquireMachinesLock for docker-flags-213000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:30:04.577187    5796 start.go:364] duration metric: took 58.375µs to acquireMachinesLock for "docker-flags-213000"
	I1216 12:30:04.577223    5796 start.go:93] Provisioning new machine with config: &{Name:docker-flags-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:docker-flags-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:30:04.577269    5796 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:30:04.584542    5796 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:30:04.599567    5796 start.go:159] libmachine.API.Create for "docker-flags-213000" (driver="qemu2")
	I1216 12:30:04.599590    5796 client.go:168] LocalClient.Create starting
	I1216 12:30:04.599653    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:30:04.599681    5796 main.go:141] libmachine: Decoding PEM data...
	I1216 12:30:04.599689    5796 main.go:141] libmachine: Parsing certificate...
	I1216 12:30:04.599730    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:30:04.599746    5796 main.go:141] libmachine: Decoding PEM data...
	I1216 12:30:04.599752    5796 main.go:141] libmachine: Parsing certificate...
	I1216 12:30:04.600043    5796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:30:05.034618    5796 main.go:141] libmachine: Creating SSH key...
	I1216 12:30:05.290668    5796 main.go:141] libmachine: Creating Disk image...
	I1216 12:30:05.290682    5796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:30:05.290943    5796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2
	I1216 12:30:05.301138    5796 main.go:141] libmachine: STDOUT: 
	I1216 12:30:05.301163    5796 main.go:141] libmachine: STDERR: 
	I1216 12:30:05.301236    5796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2 +20000M
	I1216 12:30:05.309900    5796 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:30:05.309914    5796 main.go:141] libmachine: STDERR: 
	I1216 12:30:05.309928    5796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2
	I1216 12:30:05.309935    5796 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:30:05.309944    5796 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:30:05.309979    5796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:d0:ae:bd:09:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/docker-flags-213000/disk.qcow2
	I1216 12:30:05.311729    5796 main.go:141] libmachine: STDOUT: 
	I1216 12:30:05.311744    5796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:30:05.311763    5796 client.go:171] duration metric: took 712.163333ms to LocalClient.Create
	I1216 12:30:07.313957    5796 start.go:128] duration metric: took 2.736642125s to createHost
	I1216 12:30:07.314030    5796 start.go:83] releasing machines lock for "docker-flags-213000", held for 2.736806416s
	W1216 12:30:07.314438    5796 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-213000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-213000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:30:07.325949    5796 out.go:201] 
	W1216 12:30:07.330271    5796 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:30:07.330311    5796 out.go:270] * 
	* 
	W1216 12:30:07.332893    5796 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:30:07.344148    5796 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-213000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-213000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-213000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (107.503583ms)

-- stdout --
	* The control-plane node docker-flags-213000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-213000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-213000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-213000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-213000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-213000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-213000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-213000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-213000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (99.7385ms)

-- stdout --
	* The control-plane node docker-flags-213000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-213000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-213000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-213000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-213000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-213000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-16 12:30:07.563307 -0800 PST m=+3337.443307501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-213000 -n docker-flags-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-213000 -n docker-flags-213000: exit status 7 (42.05775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-213000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-213000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-213000
--- FAIL: TestDockerFlags (12.83s)
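
On a healthy node, docker_test.go verifies the --docker-env values by running systemctl show docker --property=Environment over SSH and checking that each KEY=VALUE pair appears in the output; here the check ran against the "host is not running" hint instead. A small sketch of that kind of containment check (the sample output line is assumed, since no docker daemon ever started in this run):

-- example --
// docker_env_check.go: scan `systemctl show docker --property=Environment`
// style output for a KEY=VALUE pair, analogous to the docker_test.go:63
// assertion (which does a plain containment check on the command output).
package main

import (
	"fmt"
	"strings"
)

func hasDockerEnv(output, kv string) bool {
	for _, line := range strings.Split(output, "\n") {
		rest, ok := strings.CutPrefix(line, "Environment=")
		if !ok {
			continue
		}
		for _, pair := range strings.Fields(rest) {
			if pair == kv {
				return true
			}
		}
	}
	return false
}

func main() {
	sample := "Environment=FOO=BAR BAZ=BAT" // assumed healthy-node output
	fmt.Println(hasDockerEnv(sample, "FOO=BAR")) // true
	fmt.Println(hasDockerEnv(sample, "BAZ=BAT")) // true
	fmt.Println(hasDockerEnv(sample, "QUX=1"))   // false
}
-- /example --
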

TestForceSystemdFlag (12.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-039000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-039000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.861948958s)

-- stdout --
	* [force-systemd-flag-039000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-039000" primary control-plane node in "force-systemd-flag-039000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-039000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:29:52.636834    5782 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:29:52.636993    5782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:52.636996    5782 out.go:358] Setting ErrFile to fd 2...
	I1216 12:29:52.636998    5782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:52.637126    5782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:29:52.638280    5782 out.go:352] Setting JSON to false
	I1216 12:29:52.655869    5782 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3563,"bootTime":1734377429,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:29:52.655952    5782 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:29:52.715590    5782 out.go:177] * [force-systemd-flag-039000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:29:52.725533    5782 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:29:52.725544    5782 notify.go:220] Checking for updates...
	I1216 12:29:52.740416    5782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:29:52.744443    5782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:29:52.747392    5782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:29:52.750426    5782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:29:52.753424    5782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:29:52.756926    5782 config.go:182] Loaded profile config "force-systemd-env-227000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:52.757055    5782 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:52.757134    5782 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:29:52.760497    5782 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:29:52.766390    5782 start.go:297] selected driver: qemu2
	I1216 12:29:52.766400    5782 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:29:52.766408    5782 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:29:52.770164    5782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:29:52.774376    5782 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:29:52.777586    5782 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 12:29:52.777606    5782 cni.go:84] Creating CNI manager for ""
	I1216 12:29:52.777639    5782 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:29:52.777645    5782 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:29:52.777684    5782 start.go:340] cluster config:
	{Name:force-systemd-flag-039000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-039000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:29:52.784510    5782 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:52.793448    5782 out.go:177] * Starting "force-systemd-flag-039000" primary control-plane node in "force-systemd-flag-039000" cluster
	I1216 12:29:52.797259    5782 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:29:52.797282    5782 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:29:52.797298    5782 cache.go:56] Caching tarball of preloaded images
	I1216 12:29:52.797411    5782 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:29:52.797420    5782 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:29:52.797512    5782 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/force-systemd-flag-039000/config.json ...
	I1216 12:29:52.797529    5782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/force-systemd-flag-039000/config.json: {Name:mkbfeb566424c534e9dd0378c73975b34badfc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:29:52.798216    5782 start.go:360] acquireMachinesLock for force-systemd-flag-039000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:54.542152    5782 start.go:364] duration metric: took 1.743870417s to acquireMachinesLock for "force-systemd-flag-039000"
	I1216 12:29:54.542381    5782 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-039000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-039000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:54.542632    5782 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:54.551825    5782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:29:54.601906    5782 start.go:159] libmachine.API.Create for "force-systemd-flag-039000" (driver="qemu2")
	I1216 12:29:54.601958    5782 client.go:168] LocalClient.Create starting
	I1216 12:29:54.602141    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:54.602216    5782 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:54.602242    5782 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:54.602319    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:54.602385    5782 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:54.602401    5782 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:54.603094    5782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:54.979605    5782 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:55.026800    5782 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:55.026807    5782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:55.027012    5782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2
	I1216 12:29:55.044172    5782 main.go:141] libmachine: STDOUT: 
	I1216 12:29:55.044192    5782 main.go:141] libmachine: STDERR: 
	I1216 12:29:55.044259    5782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2 +20000M
	I1216 12:29:55.056984    5782 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:55.057007    5782 main.go:141] libmachine: STDERR: 
	I1216 12:29:55.057022    5782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2
	I1216 12:29:55.057027    5782 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:55.057037    5782 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:55.057081    5782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a2:60:cd:bf:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2
	I1216 12:29:55.059109    5782 main.go:141] libmachine: STDOUT: 
	I1216 12:29:55.059125    5782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:55.059147    5782 client.go:171] duration metric: took 457.1775ms to LocalClient.Create
	I1216 12:29:57.061390    5782 start.go:128] duration metric: took 2.518701208s to createHost
	I1216 12:29:57.061450    5782 start.go:83] releasing machines lock for "force-systemd-flag-039000", held for 2.519243125s
	W1216 12:29:57.061486    5782 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:57.079424    5782 out.go:177] * Deleting "force-systemd-flag-039000" in qemu2 ...
	W1216 12:29:57.106964    5782 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:57.106987    5782 start.go:729] Will try again in 5 seconds ...
	I1216 12:30:02.109331    5782 start.go:360] acquireMachinesLock for force-systemd-flag-039000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:30:02.109782    5782 start.go:364] duration metric: took 353.875µs to acquireMachinesLock for "force-systemd-flag-039000"
	I1216 12:30:02.109938    5782 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-039000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-039000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:30:02.110079    5782 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:30:02.125774    5782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:30:02.167129    5782 start.go:159] libmachine.API.Create for "force-systemd-flag-039000" (driver="qemu2")
	I1216 12:30:02.167185    5782 client.go:168] LocalClient.Create starting
	I1216 12:30:02.167334    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:30:02.167414    5782 main.go:141] libmachine: Decoding PEM data...
	I1216 12:30:02.167434    5782 main.go:141] libmachine: Parsing certificate...
	I1216 12:30:02.167538    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:30:02.167615    5782 main.go:141] libmachine: Decoding PEM data...
	I1216 12:30:02.167632    5782 main.go:141] libmachine: Parsing certificate...
	I1216 12:30:02.168367    5782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:30:02.343586    5782 main.go:141] libmachine: Creating SSH key...
	I1216 12:30:02.411483    5782 main.go:141] libmachine: Creating Disk image...
	I1216 12:30:02.411490    5782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:30:02.411725    5782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2
	I1216 12:30:02.421539    5782 main.go:141] libmachine: STDOUT: 
	I1216 12:30:02.421561    5782 main.go:141] libmachine: STDERR: 
	I1216 12:30:02.421634    5782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2 +20000M
	I1216 12:30:02.430244    5782 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:30:02.430302    5782 main.go:141] libmachine: STDERR: 
	I1216 12:30:02.430316    5782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2
	I1216 12:30:02.430325    5782 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:30:02.430336    5782 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:30:02.430368    5782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:1c:12:11:58:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-flag-039000/disk.qcow2
	I1216 12:30:02.432164    5782 main.go:141] libmachine: STDOUT: 
	I1216 12:30:02.432177    5782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:30:02.432190    5782 client.go:171] duration metric: took 264.997708ms to LocalClient.Create
	I1216 12:30:04.434466    5782 start.go:128] duration metric: took 2.324313583s to createHost
	I1216 12:30:04.434543    5782 start.go:83] releasing machines lock for "force-systemd-flag-039000", held for 2.324706125s
	W1216 12:30:04.434756    5782 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-039000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-039000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:30:04.440239    5782 out.go:201] 
	W1216 12:30:04.446364    5782 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:30:04.446409    5782 out.go:270] * 
	* 
	W1216 12:30:04.447623    5782 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:30:04.457773    5782 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-039000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-039000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-039000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.46525ms)

-- stdout --
	* The control-plane node force-systemd-flag-039000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-039000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-039000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-16 12:30:04.547955 -0800 PST m=+3334.427980792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-039000 -n force-systemd-flag-039000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-039000 -n force-systemd-flag-039000: exit status 7 (38.318959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-039000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-039000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-039000
--- FAIL: TestForceSystemdFlag (12.13s)
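Every qemu2 start in this report, including both createHost attempts above, fails at the same step: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet, so QEMU never receives the file descriptor for its -netdev socket,fd=3 device. Below is a minimal Go sketch, not minikube code, that reproduces the failing probe; the socket path is taken from the logs.

// socketprobe.go: dial the UNIX socket the way socket_vmnet_client does
// before handing QEMU its network fd. A "connection refused" here matches
// the STDERR captured in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Refused: the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}

A refused dial means no socket_vmnet daemon was listening at that path, which is consistent with every qemu2-driver test in this run failing in roughly ten seconds: two create attempts, each abandoned after the five-second retry.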

TestForceSystemdEnv (10.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-227000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1216 12:29:47.098539    1494 install.go:79] stdout: 
W1216 12:29:47.098757    1494 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit 

I1216 12:29:47.098785    1494 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit]
I1216 12:29:47.115945    1494 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit]
I1216 12:29:47.128473    1494 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit]
I1216 12:29:47.139922    1494 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit]
I1216 12:29:47.161345    1494 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 12:29:47.161479    1494 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1216 12:29:48.961318    1494 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1216 12:29:48.961335    1494 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1216 12:29:48.961390    1494 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1216 12:29:48.961426    1494 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit
I1216 12:29:49.353394    1494 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10932d900 0x10932d900 0x10932d900 0x10932d900 0x10932d900 0x10932d900 0x10932d900] Decompressors:map[bz2:0x140007b3450 gz:0x140007b3458 tar:0x140007b3400 tar.bz2:0x140007b3410 tar.gz:0x140007b3420 tar.xz:0x140007b3430 tar.zst:0x140007b3440 tbz2:0x140007b3410 tgz:0x140007b3420 txz:0x140007b3430 tzst:0x140007b3440 xz:0x140007b3460 zip:0x140007b3470 zst:0x140007b3468] Getters:map[file:0x14000905ee0 http:0x140006c73b0 https:0x140006c7400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 12:29:49.353541    1494 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit
I1216 12:29:52.559461    1494 install.go:79] stdout: 
W1216 12:29:52.559579    1494 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit 

I1216 12:29:52.559598    1494 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit]
I1216 12:29:52.571354    1494 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit]
I1216 12:29:52.582920    1494 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit]
I1216 12:29:52.593626    1494 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-227000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.851224917s)

-- stdout --
	* [force-systemd-env-227000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-227000" primary control-plane node in "force-systemd-env-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:29:44.764960    5744 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:29:44.765134    5744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:44.765138    5744 out.go:358] Setting ErrFile to fd 2...
	I1216 12:29:44.765140    5744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:44.765258    5744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:29:44.766343    5744 out.go:352] Setting JSON to false
	I1216 12:29:44.783997    5744 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3555,"bootTime":1734377429,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:29:44.784099    5744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:29:44.790386    5744 out.go:177] * [force-systemd-env-227000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:29:44.798297    5744 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:29:44.798333    5744 notify.go:220] Checking for updates...
	I1216 12:29:44.806259    5744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:29:44.809315    5744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:29:44.813152    5744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:29:44.816291    5744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:29:44.819315    5744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1216 12:29:44.822729    5744 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:44.822787    5744 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:29:44.826278    5744 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:29:44.834316    5744 start.go:297] selected driver: qemu2
	I1216 12:29:44.834330    5744 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:29:44.834338    5744 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:29:44.836882    5744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:29:44.841387    5744 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:29:44.844370    5744 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 12:29:44.844383    5744 cni.go:84] Creating CNI manager for ""
	I1216 12:29:44.844404    5744 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:29:44.844413    5744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:29:44.844439    5744 start.go:340] cluster config:
	{Name:force-systemd-env-227000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:29:44.849050    5744 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:44.857210    5744 out.go:177] * Starting "force-systemd-env-227000" primary control-plane node in "force-systemd-env-227000" cluster
	I1216 12:29:44.861326    5744 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:29:44.861342    5744 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:29:44.861350    5744 cache.go:56] Caching tarball of preloaded images
	I1216 12:29:44.861421    5744 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:29:44.861429    5744 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:29:44.861505    5744 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/force-systemd-env-227000/config.json ...
	I1216 12:29:44.861518    5744 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/force-systemd-env-227000/config.json: {Name:mk616ee7a45ce5a642f07f146000762a216e3d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:29:44.861977    5744 start.go:360] acquireMachinesLock for force-systemd-env-227000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:44.862031    5744 start.go:364] duration metric: took 45.417µs to acquireMachinesLock for "force-systemd-env-227000"
	I1216 12:29:44.862045    5744 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:44.862078    5744 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:44.871303    5744 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:29:44.889766    5744 start.go:159] libmachine.API.Create for "force-systemd-env-227000" (driver="qemu2")
	I1216 12:29:44.889794    5744 client.go:168] LocalClient.Create starting
	I1216 12:29:44.889877    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:44.889915    5744 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:44.889926    5744 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:44.889963    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:44.889993    5744 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:44.890002    5744 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:44.890474    5744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:45.050416    5744 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:45.173224    5744 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:45.173231    5744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:45.173466    5744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2
	I1216 12:29:45.183646    5744 main.go:141] libmachine: STDOUT: 
	I1216 12:29:45.183667    5744 main.go:141] libmachine: STDERR: 
	I1216 12:29:45.183724    5744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2 +20000M
	I1216 12:29:45.192153    5744 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:45.192169    5744 main.go:141] libmachine: STDERR: 
	I1216 12:29:45.192193    5744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2
	I1216 12:29:45.192199    5744 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:45.192210    5744 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:45.192245    5744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:d7:41:b9:7e:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2
	I1216 12:29:45.194089    5744 main.go:141] libmachine: STDOUT: 
	I1216 12:29:45.194105    5744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:45.194123    5744 client.go:171] duration metric: took 304.320167ms to LocalClient.Create
	I1216 12:29:47.196257    5744 start.go:128] duration metric: took 2.334151375s to createHost
	I1216 12:29:47.196271    5744 start.go:83] releasing machines lock for "force-systemd-env-227000", held for 2.33421575s
	W1216 12:29:47.196282    5744 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:47.208709    5744 out.go:177] * Deleting "force-systemd-env-227000" in qemu2 ...
	W1216 12:29:47.222458    5744 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:47.222473    5744 start.go:729] Will try again in 5 seconds ...
	I1216 12:29:52.222859    5744 start.go:360] acquireMachinesLock for force-systemd-env-227000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:52.223458    5744 start.go:364] duration metric: took 430.458µs to acquireMachinesLock for "force-systemd-env-227000"
	I1216 12:29:52.223599    5744 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:52.223837    5744 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:52.245240    5744 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 12:29:52.293305    5744 start.go:159] libmachine.API.Create for "force-systemd-env-227000" (driver="qemu2")
	I1216 12:29:52.293379    5744 client.go:168] LocalClient.Create starting
	I1216 12:29:52.293537    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:52.293611    5744 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:52.293628    5744 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:52.293689    5744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:52.293749    5744 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:52.293765    5744 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:52.294332    5744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:52.465753    5744 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:52.517397    5744 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:52.517402    5744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:52.517625    5744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2
	I1216 12:29:52.527896    5744 main.go:141] libmachine: STDOUT: 
	I1216 12:29:52.527917    5744 main.go:141] libmachine: STDERR: 
	I1216 12:29:52.527976    5744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2 +20000M
	I1216 12:29:52.537392    5744 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:52.537420    5744 main.go:141] libmachine: STDERR: 
	I1216 12:29:52.537435    5744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2
	I1216 12:29:52.537442    5744 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:52.537449    5744 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:52.537491    5744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:48:d2:c5:de:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/force-systemd-env-227000/disk.qcow2
	I1216 12:29:52.539627    5744 main.go:141] libmachine: STDOUT: 
	I1216 12:29:52.539643    5744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:52.539657    5744 client.go:171] duration metric: took 246.269625ms to LocalClient.Create
	I1216 12:29:54.541966    5744 start.go:128] duration metric: took 2.31798875s to createHost
	I1216 12:29:54.542022    5744 start.go:83] releasing machines lock for "force-systemd-env-227000", held for 2.318521042s
	W1216 12:29:54.542304    5744 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:54.555884    5744 out.go:201] 
	W1216 12:29:54.560953    5744 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:29:54.560987    5744 out.go:270] * 
	* 
	W1216 12:29:54.563604    5744 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:29:54.570810    5744 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-227000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-227000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-227000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (108.309375ms)

-- stdout --
	* The control-plane node force-systemd-env-227000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-227000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-227000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-16 12:29:54.692267 -0800 PST m=+3324.572376209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-227000 -n force-systemd-env-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-227000 -n force-systemd-env-227000: exit status 7 (43.638167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-227000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-227000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-227000
--- FAIL: TestForceSystemdEnv (10.22s)
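One more pattern worth noting from this section: the interleaved install.go/download.go lines (pid 1494, from a concurrently running hyperkit driver test) show the driver updater requesting the arch-specific artifact first (docker-machine-driver-hyperkit-arm64), hitting a 404 on its checksum file, and then falling back to the common, unsuffixed artifact. Below is a rough Go sketch of that try-specific-then-fall-back shape; downloadFile is a hypothetical stand-in, and minikube's real downloader also verifies checksums.

// fallbackdownload.go: sketch of the arch-specific-then-common fallback seen
// in the install log. downloadFile is a simplified stand-in; the real
// downloader also validates the .sha256 checksum next to each artifact.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"runtime"
)

func downloadFile(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Mirrors the "bad response code: 404" failure in the log.
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	dst := "docker-machine-driver-hyperkit"
	// Try the arch-specific artifact first (e.g. "-arm64"), then fall back
	// to the common, unsuffixed one, as driver.go:46 does above.
	if err := downloadFile(base+"-"+runtime.GOARCH, dst); err != nil {
		fmt.Fprintf(os.Stderr, "arch-specific download failed (%v); trying common version\n", err)
		if err := downloadFile(base, dst); err != nil {
			fmt.Fprintln(os.Stderr, "common download failed too:", err)
			os.Exit(1)
		}
	}
	fmt.Println("downloaded", dst)
}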

TestFunctional/parallel/ServiceCmdConnect (39.11s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-278000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-278000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-j4drk" [18e45e78-def6-4fb2-9fc4-2de5c1eee469] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-j4drk" [18e45e78-def6-4fb2-9fc4-2de5c1eee469] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.008287708s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31555
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
I1216 11:45:42.292838    1494 retry.go:31] will retry after 848.148693ms: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
I1216 11:45:43.144783    1494 retry.go:31] will retry after 953.662043ms: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
I1216 11:45:44.101946    1494 retry.go:31] will retry after 3.04272359s: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
I1216 11:45:47.147173    1494 retry.go:31] will retry after 2.332855432s: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
I1216 11:45:49.483822    1494 retry.go:31] will retry after 5.445729065s: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
I1216 11:45:54.932453    1494 retry.go:31] will retry after 11.148928186s: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31555: Get "http://192.168.105.4:31555": dial tcp 192.168.105.4:31555: connect: connection refused
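The retry.go:31 lines above show the client-side loop behind these fetches: each refused GET is retried after a roughly doubling, jittered delay (848ms, 953ms, 3.0s, 2.3s, 5.4s, 11.1s) until the attempt budget runs out. A self-contained Go sketch of such a loop follows; the initial delay and budget are illustrative, not minikube's.

// retryfetch.go: sketch of a jittered, roughly doubling retry loop like the
// retry.go:31 entries above. Initial delay and budget are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"os"
	"time"
)

func fetchWithRetry(url string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := 800 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up on %s: %w", url, err)
		}
		// Add jitter so parallel retriers do not stampede, then double
		// the base delay, roughly matching the intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Fprintf(os.Stderr, "will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	if err := fetchWithRetry("http://192.168.105.4:31555", 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("endpoint reachable")
}

The endpoint stays refused throughout because the pod behind the NodePort never becomes Ready, as the post-mortem below shows.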
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-278000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-8449669db6-j4drk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-278000/192.168.105.4
Start Time:       Mon, 16 Dec 2024 11:45:28 -0800
Labels:           app=hello-node-connect
                  pod-template-hash=8449669db6
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-8449669db6
Containers:
  echoserver-arm:
    Container ID:   docker://5ec9293aa45a2444e2932e13ef72bb53876791f19c0a253b0e16f9716b2a441a
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 16 Dec 2024 11:45:54 -0800
      Finished:     Mon, 16 Dec 2024 11:45:54 -0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 16 Dec 2024 11:45:36 -0800
      Finished:     Mon, 16 Dec 2024 11:45:36 -0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqtjx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-cqtjx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  37s                default-scheduler  Successfully assigned default/hello-node-connect-8449669db6-j4drk to functional-278000
  Normal   Pulling    38s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.672s (7.455s including waiting). Image size: 84957542 bytes.
  Normal   Created    12s (x3 over 30s)  kubelet            Created container: echoserver-arm
  Normal   Started    12s (x3 over 30s)  kubelet            Started container echoserver-arm
  Normal   Pulled     12s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    11s (x3 over 29s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-8449669db6-j4drk_default(18e45e78-def6-4fb2-9fc4-2de5c1eee469)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-278000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
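That one-line log is the root cause of the crash loop recorded in the pod events above: "exec format error" means the entrypoint binary (/usr/sbin/nginx inside registry.k8s.io/echoserver-arm:1.8) was built for a CPU architecture other than the node's arm64. A quick first check is to compare the image's declared architecture with the host's, as in the hedged Go sketch below; note that a mislabeled image can declare arm64 while still shipping a foreign binary, which appears to be the case here.

// archcheck.go: compare an image's declared architecture with the host's
// using the real `docker image inspect` CLI. Caveat: a wrongly labeled
// image passes this check yet still crashes at exec time.
package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func main() {
	img := "registry.k8s.io/echoserver-arm:1.8"
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", img).Output()
	if err != nil {
		fmt.Println("inspect failed (is the image pulled?):", err)
		return
	}
	imgArch := strings.TrimSpace(string(out))
	fmt.Printf("image arch=%s, host arch=%s\n", imgArch, runtime.GOARCH)
	if imgArch != runtime.GOARCH {
		fmt.Println("mismatch: the image's binaries will fail with 'exec format error'")
	}
}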
functional_test.go:1614: (dbg) Run:  kubectl --context functional-278000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.223.228
IPs:                      10.109.223.228
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31555/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-278000 -n functional-278000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:45 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:45 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2288691701/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:45 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:45 PST | 16 Dec 24 11:45 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh -- ls                                                                                          | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:45 PST | 16 Dec 24 11:45 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh cat                                                                                            | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:45 PST | 16 Dec 24 11:45 PST |
	|           | /mount-9p/test-1734378353412355000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh stat                                                                                           | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh stat                                                                                           | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh sudo                                                                                           | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1772062129/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh -- ls                                                                                          | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh sudo                                                                                           | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-278000 ssh findmnt                                                                                        | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST | 16 Dec 24 11:46 PST |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-278000                                                                                                 | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-278000 --dry-run                                                                                       | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-278000 | jenkins | v1.34.0 | 16 Dec 24 11:46 PST |                     |
	|           | -p functional-278000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:46:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:46:03.521873    2380 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:46:03.522035    2380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:03.522039    2380 out.go:358] Setting ErrFile to fd 2...
	I1216 11:46:03.522041    2380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:03.522155    2380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:46:03.523241    2380 out.go:352] Setting JSON to false
	I1216 11:46:03.541594    2380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":934,"bootTime":1734377429,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:46:03.541695    2380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:46:03.546240    2380 out.go:177] * [functional-278000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:46:03.554266    2380 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 11:46:03.554318    2380 notify.go:220] Checking for updates...
	I1216 11:46:03.561179    2380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:46:03.565277    2380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:46:03.568206    2380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:46:03.571236    2380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 11:46:03.574250    2380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:46:03.577499    2380 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:46:03.577765    2380 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:46:03.581163    2380 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 11:46:03.587191    2380 start.go:297] selected driver: qemu2
	I1216 11:46:03.587197    2380 start.go:901] validating driver "qemu2" against &{Name:functional-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:46:03.587248    2380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:46:03.589663    2380 cni.go:84] Creating CNI manager for ""
	I1216 11:46:03.589700    2380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 11:46:03.589757    2380 start.go:340] cluster config:
	{Name:functional-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:46:03.602210    2380 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 16 19:45:57 functional-278000 dockerd[5835]: time="2024-12-16T19:45:57.295621483Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 16 19:45:57 functional-278000 dockerd[5835]: time="2024-12-16T19:45:57.306825020Z" level=warning msg="cleanup warnings time=\"2024-12-16T19:45:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 16 19:45:59 functional-278000 dockerd[5829]: time="2024-12-16T19:45:59.382466045Z" level=info msg="ignoring event" container=f2ea1eed228178959ac8b51a48db5b445cee486f37c9777fc11a4cb4600aba19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 16 19:45:59 functional-278000 dockerd[5835]: time="2024-12-16T19:45:59.382797031Z" level=info msg="shim disconnected" id=f2ea1eed228178959ac8b51a48db5b445cee486f37c9777fc11a4cb4600aba19 namespace=moby
	Dec 16 19:45:59 functional-278000 dockerd[5835]: time="2024-12-16T19:45:59.382944067Z" level=warning msg="cleaning up after shim disconnected" id=f2ea1eed228178959ac8b51a48db5b445cee486f37c9777fc11a4cb4600aba19 namespace=moby
	Dec 16 19:45:59 functional-278000 dockerd[5835]: time="2024-12-16T19:45:59.382949900Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.235010222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.235056303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.235062553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.235094427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 19:46:01 functional-278000 dockerd[5829]: time="2024-12-16T19:46:01.271096795Z" level=info msg="ignoring event" container=1d960ae2bc5b372902285938fae3f7568fb1fd4b1b9ff6aa9b8b6f0f3dbdd927 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.271347243Z" level=info msg="shim disconnected" id=1d960ae2bc5b372902285938fae3f7568fb1fd4b1b9ff6aa9b8b6f0f3dbdd927 namespace=moby
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.271404116Z" level=warning msg="cleaning up after shim disconnected" id=1d960ae2bc5b372902285938fae3f7568fb1fd4b1b9ff6aa9b8b6f0f3dbdd927 namespace=moby
	Dec 16 19:46:01 functional-278000 dockerd[5835]: time="2024-12-16T19:46:01.271409324Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.524591602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.524686140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.524702889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.524741346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.536683388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.536710387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.536715886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 19:46:04 functional-278000 dockerd[5835]: time="2024-12-16T19:46:04.536848256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 16 19:46:04 functional-278000 cri-dockerd[6110]: time="2024-12-16T19:46:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/923f99d7a692b02d4934d7cdbd25507b5fce851dd8512917ac2808dc3e3ca708/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 16 19:46:04 functional-278000 cri-dockerd[6110]: time="2024-12-16T19:46:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/de4d4f58ad1bfcf6d982f57c0d50aea0f4e3e73b60490d357a05560e2b9f6aaa/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 16 19:46:04 functional-278000 dockerd[5829]: time="2024-12-16T19:46:04.833041061Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=542a577c4b6c1f86 traceID=7dc9b58d7ab14b940557cbc6a52a084f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1d960ae2bc5b3       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            2                   730781b3a4988       hello-node-64fc58db8c-s44pb
	5c58fd39ccff6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   f2ea1eed22817       busybox-mount
	5ec9293aa45a2       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   c2093afb55b31       hello-node-connect-8449669db6-j4drk
	fa1e908106043       nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be                         26 seconds ago       Running             myfrontend                0                   143b42c71e0da       sp-pod
	be774c6e2efb7       nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         44 seconds ago       Running             nginx                     0                   90e731a585af2       nginx-svc
	8968306a103b2       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   018959f92470a       coredns-668d6bf9bc-s8kx9
	6b3873b7f8249       2f50386e20bfd                                                                                         About a minute ago   Running             kube-proxy                2                   ccf7ad8450f70       kube-proxy-hhzfq
	6920344b4dfbf       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   c9cb96f52b1f9       storage-provisioner
	bfb9b054d12d3       a8d049396f6b8                                                                                         About a minute ago   Running             kube-controller-manager   2                   cd1c4561d1c32       kube-controller-manager-functional-278000
	ba106f5abef04       7fc9d4aa817aa                                                                                         About a minute ago   Running             etcd                      2                   db90ef84b04f8       etcd-functional-278000
	2c36759ba5d0c       c3ff26fb59f37                                                                                         About a minute ago   Running             kube-scheduler            2                   0827be7924975       kube-scheduler-functional-278000
	822ac56d1fa32       2b5bd0f16085a                                                                                         About a minute ago   Running             kube-apiserver            0                   99e952000ac38       kube-apiserver-functional-278000
	39ed8f6b14cc4       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   8b010af1d6f41       coredns-668d6bf9bc-s8kx9
	09b7cd5503f2f       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   1bbc38f5ebff8       storage-provisioner
	1f3583f4e8383       2f50386e20bfd                                                                                         2 minutes ago        Exited              kube-proxy                1                   b798838be01ba       kube-proxy-hhzfq
	63af1c83c8e0c       7fc9d4aa817aa                                                                                         2 minutes ago        Exited              etcd                      1                   3d8b9f47c0131       etcd-functional-278000
	c8a68d192245f       a8d049396f6b8                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   3332b9cafbd40       kube-controller-manager-functional-278000
	8e67859df9469       c3ff26fb59f37                                                                                         2 minutes ago        Exited              kube-scheduler            1                   43605b5c0e0d7       kube-scheduler-functional-278000
	
	
	==> coredns [39ed8f6b14cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3409057fd96ced495c54dfbc11c46c37cafb7915196e2732d6ff3dd9cada5deb0c590f4db58bfc38d19787f246b979122fa34cab1d3a970e57ae724a4727661f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50064 - 64603 "HINFO IN 4412105656976810953.3497092018656619773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010111792s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8968306a103b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3409057fd96ced495c54dfbc11c46c37cafb7915196e2732d6ff3dd9cada5deb0c590f4db58bfc38d19787f246b979122fa34cab1d3a970e57ae724a4727661f
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59572 - 65451 "HINFO IN 502761838966601855.8700533506464818424. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.010910908s
	[INFO] 10.244.0.1:61193 - 8795 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000096329s
	[INFO] 10.244.0.1:14003 - 10501 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000102704s
	[INFO] 10.244.0.1:47511 - 5139 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001552765s
	[INFO] 10.244.0.1:20301 - 47805 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000114411s
	[INFO] 10.244.0.1:29606 - 13565 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00009562s
	[INFO] 10.244.0.1:36309 - 55747 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000290029s
	
	
	==> describe nodes <==
	Name:               functional-278000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-278000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=functional-278000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T11_43_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 19:43:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-278000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 19:46:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 19:45:51 +0000   Mon, 16 Dec 2024 19:43:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 19:45:51 +0000   Mon, 16 Dec 2024 19:43:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 19:45:51 +0000   Mon, 16 Dec 2024 19:43:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 19:45:51 +0000   Mon, 16 Dec 2024 19:43:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-278000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904864Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904864Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bbebc6861e44481b194087725cb54f4
	  System UUID:                9bbebc6861e44481b194087725cb54f4
	  Boot ID:                    2fe3be95-e450-4c91-88fd-328b69153e1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64fc58db8c-s44pb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  default                     hello-node-connect-8449669db6-j4drk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-668d6bf9bc-s8kx9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m30s
	  kube-system                 etcd-functional-278000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m36s
	  kube-system                 kube-apiserver-functional-278000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-278000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-hhzfq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-functional-278000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-gnkqw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-t5b5w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m29s                kube-proxy       
	  Normal  Starting                 74s                  kube-proxy       
	  Normal  Starting                 2m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m36s                kubelet          Node functional-278000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m36s                kubelet          Node functional-278000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s                kubelet          Node functional-278000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m32s                kubelet          Node functional-278000 status is now: NodeReady
	  Normal  RegisteredNode           2m31s                node-controller  Node functional-278000 event: Registered Node functional-278000 in Controller
	  Normal  CIDRAssignmentFailed     2m31s                cidrAllocator    Node functional-278000 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node functional-278000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node functional-278000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node functional-278000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                 node-controller  Node functional-278000 event: Registered Node functional-278000 in Controller
	  Normal  Starting                 78s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)    kubelet          Node functional-278000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)    kubelet          Node functional-278000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)    kubelet          Node functional-278000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                  node-controller  Node functional-278000 event: Registered Node functional-278000 in Controller
	
	
	==> dmesg <==
	[  +8.848563] systemd-fstab-generator[4917]: Ignoring "noauto" option for root device
	[ +10.968381] systemd-fstab-generator[5346]: Ignoring "noauto" option for root device
	[  +0.051842] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.119506] systemd-fstab-generator[5380]: Ignoring "noauto" option for root device
	[  +0.113900] systemd-fstab-generator[5392]: Ignoring "noauto" option for root device
	[  +0.110213] systemd-fstab-generator[5406]: Ignoring "noauto" option for root device
	[  +5.128171] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.439879] systemd-fstab-generator[6063]: Ignoring "noauto" option for root device
	[  +0.090773] systemd-fstab-generator[6075]: Ignoring "noauto" option for root device
	[  +0.104282] systemd-fstab-generator[6087]: Ignoring "noauto" option for root device
	[  +0.111341] systemd-fstab-generator[6102]: Ignoring "noauto" option for root device
	[  +0.228542] systemd-fstab-generator[6270]: Ignoring "noauto" option for root device
	[  +0.964437] systemd-fstab-generator[6391]: Ignoring "noauto" option for root device
	[  +3.407659] kauditd_printk_skb: 203 callbacks suppressed
	[  +6.580204] kauditd_printk_skb: 33 callbacks suppressed
	[Dec16 19:45] systemd-fstab-generator[7492]: Ignoring "noauto" option for root device
	[  +5.060237] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.025387] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.035893] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.621437] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.733225] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.984020] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.103284] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.022859] kauditd_printk_skb: 14 callbacks suppressed
	[Dec16 19:46] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [63af1c83c8e0] <==
	{"level":"info","ts":"2024-12-16T19:44:04.333215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-16T19:44:04.333267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-12-16T19:44:04.333293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:04.333319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:04.333337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:04.333441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:04.337037Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-278000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T19:44:04.337146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T19:44:04.337687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T19:44:04.337904Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T19:44:04.337803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T19:44:04.338561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T19:44:04.338807Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T19:44:04.339830Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T19:44:04.339884Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-12-16T19:44:34.135883Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-16T19:44:34.135916Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-278000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-12-16T19:44:34.135967Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T19:44:34.135979Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T19:44:34.136009Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T19:44:34.136042Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-16T19:44:34.144948Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-12-16T19:44:34.146709Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-16T19:44:34.146748Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-16T19:44:34.146752Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-278000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ba106f5abef0] <==
	{"level":"info","ts":"2024-12-16T19:44:48.949990Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-12-16T19:44:48.950048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T19:44:48.950079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T19:44:48.950903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T19:44:48.951600Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T19:44:48.951661Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-16T19:44:48.951713Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-16T19:44:48.951975Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T19:44:48.952005Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T19:44:49.946988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:49.947138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:49.947603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-16T19:44:49.947667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-12-16T19:44:49.947688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-12-16T19:44:49.947750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-12-16T19:44:49.947797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-12-16T19:44:49.950538Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-278000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T19:44:49.950790Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T19:44:49.950942Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T19:44:49.950983Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T19:44:49.951021Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T19:44:49.952630Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T19:44:49.952648Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T19:44:49.954295Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-12-16T19:44:49.955372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:46:06 up 2 min,  0 users,  load average: 1.58, 0.90, 0.36
	Linux functional-278000 5.10.207 #1 SMP PREEMPT Thu Dec 12 20:40:31 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [822ac56d1fa3] <==
	I1216 19:44:50.542382       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 19:44:50.542389       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 19:44:50.542422       1 cache.go:39] Caches are synced for autoregister controller
	I1216 19:44:50.558554       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1216 19:44:50.573488       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1216 19:44:50.573543       1 policy_source.go:240] refreshing policies
	I1216 19:44:50.588003       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 19:44:51.266082       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1216 19:44:51.433469       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 19:44:51.876043       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1216 19:44:51.888027       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1216 19:44:51.894653       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 19:44:51.896539       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 19:44:53.961500       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1216 19:44:54.010597       1 controller.go:615] quota admission added evaluator for: endpoints
	I1216 19:44:54.119531       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 19:45:12.369947       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.163.37"}
	I1216 19:45:18.790717       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.123.247"}
	I1216 19:45:28.256848       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.223.228"}
	E1216 19:45:38.078863       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49696: use of closed network connection
	E1216 19:45:46.083841       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49706: use of closed network connection
	I1216 19:45:46.166077       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.221.113"}
	I1216 19:46:04.134220       1 controller.go:615] quota admission added evaluator for: namespaces
	I1216 19:46:04.215503       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.143.246"}
	I1216 19:46:04.223840       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.216.5"}
	
	
	==> kube-controller-manager [bfb9b054d12d] <==
	I1216 19:45:46.139775       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64fc58db8c" duration="18.874µs"
	I1216 19:45:47.064033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64fc58db8c" duration="39.331µs"
	I1216 19:45:48.097464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64fc58db8c" duration="43.081µs"
	I1216 19:45:51.714377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-278000"
	I1216 19:45:55.201226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-8449669db6" duration="37.248µs"
	I1216 19:46:01.222021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64fc58db8c" duration="36.873µs"
	I1216 19:46:01.315044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64fc58db8c" duration="29.29µs"
	I1216 19:46:04.161111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.762717ms"
	E1216 19:46:04.161137       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 19:46:04.166693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="8.737051ms"
	E1216 19:46:04.167024       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 19:46:04.167520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.022915ms"
	E1216 19:46:04.167570       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 19:46:04.171638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="2.244031ms"
	E1216 19:46:04.172029       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 19:46:04.173410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="4.145868ms"
	E1216 19:46:04.173452       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 19:46:04.184296       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="8.930209ms"
	I1216 19:46:04.188558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="4.214074ms"
	I1216 19:46:04.188654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="38.956µs"
	I1216 19:46:04.195440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="16.088952ms"
	I1216 19:46:04.199170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="15.041µs"
	I1216 19:46:04.203292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.828047ms"
	I1216 19:46:04.209847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.480854ms"
	I1216 19:46:04.210769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="24.29µs"
	
	
	==> kube-controller-manager [c8a68d192245] <==
	I1216 19:44:08.067240       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 19:44:08.067243       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1216 19:44:08.067275       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1216 19:44:08.067316       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-278000"
	I1216 19:44:08.068281       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1216 19:44:08.070476       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 19:44:08.070505       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 19:44:08.070512       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 19:44:08.072993       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1216 19:44:08.073034       1 shared_informer.go:320] Caches are synced for HPA
	I1216 19:44:08.098301       1 shared_informer.go:320] Caches are synced for stateful set
	I1216 19:44:08.098334       1 shared_informer.go:320] Caches are synced for cronjob
	I1216 19:44:08.098395       1 shared_informer.go:320] Caches are synced for TTL
	I1216 19:44:08.098433       1 shared_informer.go:320] Caches are synced for attach detach
	I1216 19:44:08.098541       1 shared_informer.go:320] Caches are synced for endpoint
	I1216 19:44:08.098572       1 shared_informer.go:320] Caches are synced for service account
	I1216 19:44:08.098439       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1216 19:44:08.098443       1 shared_informer.go:320] Caches are synced for job
	I1216 19:44:08.102672       1 shared_informer.go:320] Caches are synced for resource quota
	I1216 19:44:08.102675       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1216 19:44:08.108780       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 19:44:08.400732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="327.695134ms"
	I1216 19:44:08.401908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.996µs"
	I1216 19:44:13.828705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.390526ms"
	I1216 19:44:13.830363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="27.246µs"
	
	
	==> kube-proxy [1f3583f4e838] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 19:44:06.120504       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 19:44:06.129426       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1216 19:44:06.129456       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 19:44:06.152302       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 19:44:06.152322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 19:44:06.152339       1 server_linux.go:170] "Using iptables Proxier"
	I1216 19:44:06.153035       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 19:44:06.153135       1 server.go:497] "Version info" version="v1.32.0"
	I1216 19:44:06.153141       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:44:06.153934       1 config.go:199] "Starting service config controller"
	I1216 19:44:06.153939       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 19:44:06.153948       1 config.go:105] "Starting endpoint slice config controller"
	I1216 19:44:06.153950       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 19:44:06.154104       1 config.go:329] "Starting node config controller"
	I1216 19:44:06.154106       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 19:44:06.254650       1 shared_informer.go:320] Caches are synced for node config
	I1216 19:44:06.254653       1 shared_informer.go:320] Caches are synced for service config
	I1216 19:44:06.254662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6b3873b7f824] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 19:44:51.736080       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 19:44:51.739731       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1216 19:44:51.739756       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 19:44:51.746912       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 19:44:51.746928       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 19:44:51.746939       1 server_linux.go:170] "Using iptables Proxier"
	I1216 19:44:51.747585       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 19:44:51.747665       1 server.go:497] "Version info" version="v1.32.0"
	I1216 19:44:51.747671       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:44:51.748103       1 config.go:199] "Starting service config controller"
	I1216 19:44:51.748111       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 19:44:51.748120       1 config.go:105] "Starting endpoint slice config controller"
	I1216 19:44:51.748122       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 19:44:51.748292       1 config.go:329] "Starting node config controller"
	I1216 19:44:51.748294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 19:44:51.848308       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 19:44:51.848328       1 shared_informer.go:320] Caches are synced for service config
	I1216 19:44:51.848438       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2c36759ba5d0] <==
	I1216 19:44:49.235631       1 serving.go:386] Generated self-signed cert in-memory
	W1216 19:44:50.450991       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 19:44:50.451146       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 19:44:50.451191       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 19:44:50.451206       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 19:44:50.490856       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1216 19:44:50.490873       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:44:50.491860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1216 19:44:50.491926       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 19:44:50.491939       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 19:44:50.491971       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 19:44:50.592429       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8e67859df946] <==
	I1216 19:44:03.847734       1 serving.go:386] Generated self-signed cert in-memory
	W1216 19:44:04.841735       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 19:44:04.841780       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 19:44:04.841794       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 19:44:04.841802       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 19:44:04.870903       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1216 19:44:04.870914       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:44:04.871820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 19:44:04.871864       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 19:44:04.871920       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1216 19:44:04.871947       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 19:44:04.972069       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1216 19:44:34.122112       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 16 19:45:48 functional-278000 kubelet[6398]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 19:45:48 functional-278000 kubelet[6398]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 19:45:48 functional-278000 kubelet[6398]: I1216 19:45:48.298491    6398 scope.go:117] "RemoveContainer" containerID="62e4126ea576195ac5b92457ea6ecf8953c51baae4c7c53bc5f01e96e9c5141a"
	Dec 16 19:45:54 functional-278000 kubelet[6398]: I1216 19:45:54.210270    6398 scope.go:117] "RemoveContainer" containerID="307b35f03e8d0f266097fbac3937621e2bf52cd58bb1b85eccf375206029dd08"
	Dec 16 19:45:55 functional-278000 kubelet[6398]: I1216 19:45:55.194426    6398 scope.go:117] "RemoveContainer" containerID="307b35f03e8d0f266097fbac3937621e2bf52cd58bb1b85eccf375206029dd08"
	Dec 16 19:45:55 functional-278000 kubelet[6398]: I1216 19:45:55.194590    6398 scope.go:117] "RemoveContainer" containerID="5ec9293aa45a2444e2932e13ef72bb53876791f19c0a253b0e16f9716b2a441a"
	Dec 16 19:45:55 functional-278000 kubelet[6398]: E1216 19:45:55.194658    6398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-8449669db6-j4drk_default(18e45e78-def6-4fb2-9fc4-2de5c1eee469)\"" pod="default/hello-node-connect-8449669db6-j4drk" podUID="18e45e78-def6-4fb2-9fc4-2de5c1eee469"
	Dec 16 19:45:55 functional-278000 kubelet[6398]: I1216 19:45:55.389940    6398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/33400b94-a07a-4886-961e-f9b2b22f38c4-test-volume\") pod \"busybox-mount\" (UID: \"33400b94-a07a-4886-961e-f9b2b22f38c4\") " pod="default/busybox-mount"
	Dec 16 19:45:55 functional-278000 kubelet[6398]: I1216 19:45:55.389973    6398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfnr\" (UniqueName: \"kubernetes.io/projected/33400b94-a07a-4886-961e-f9b2b22f38c4-kube-api-access-kqfnr\") pod \"busybox-mount\" (UID: \"33400b94-a07a-4886-961e-f9b2b22f38c4\") " pod="default/busybox-mount"
	Dec 16 19:45:59 functional-278000 kubelet[6398]: I1216 19:45:59.433115    6398 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/33400b94-a07a-4886-961e-f9b2b22f38c4-test-volume\") pod \"33400b94-a07a-4886-961e-f9b2b22f38c4\" (UID: \"33400b94-a07a-4886-961e-f9b2b22f38c4\") "
	Dec 16 19:45:59 functional-278000 kubelet[6398]: I1216 19:45:59.433140    6398 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfnr\" (UniqueName: \"kubernetes.io/projected/33400b94-a07a-4886-961e-f9b2b22f38c4-kube-api-access-kqfnr\") pod \"33400b94-a07a-4886-961e-f9b2b22f38c4\" (UID: \"33400b94-a07a-4886-961e-f9b2b22f38c4\") "
	Dec 16 19:45:59 functional-278000 kubelet[6398]: I1216 19:45:59.433312    6398 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33400b94-a07a-4886-961e-f9b2b22f38c4-test-volume" (OuterVolumeSpecName: "test-volume") pod "33400b94-a07a-4886-961e-f9b2b22f38c4" (UID: "33400b94-a07a-4886-961e-f9b2b22f38c4"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 16 19:45:59 functional-278000 kubelet[6398]: I1216 19:45:59.435884    6398 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33400b94-a07a-4886-961e-f9b2b22f38c4-kube-api-access-kqfnr" (OuterVolumeSpecName: "kube-api-access-kqfnr") pod "33400b94-a07a-4886-961e-f9b2b22f38c4" (UID: "33400b94-a07a-4886-961e-f9b2b22f38c4"). InnerVolumeSpecName "kube-api-access-kqfnr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 16 19:45:59 functional-278000 kubelet[6398]: I1216 19:45:59.534061    6398 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/33400b94-a07a-4886-961e-f9b2b22f38c4-test-volume\") on node \"functional-278000\" DevicePath \"\""
	Dec 16 19:45:59 functional-278000 kubelet[6398]: I1216 19:45:59.534075    6398 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqfnr\" (UniqueName: \"kubernetes.io/projected/33400b94-a07a-4886-961e-f9b2b22f38c4-kube-api-access-kqfnr\") on node \"functional-278000\" DevicePath \"\""
	Dec 16 19:46:00 functional-278000 kubelet[6398]: I1216 19:46:00.297683    6398 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ea1eed228178959ac8b51a48db5b445cee486f37c9777fc11a4cb4600aba19"
	Dec 16 19:46:01 functional-278000 kubelet[6398]: I1216 19:46:01.209926    6398 scope.go:117] "RemoveContainer" containerID="fc5c9b717b6d28b30da7bba6a9d7192e33d37159ff05eb53bfa647957db7fee2"
	Dec 16 19:46:01 functional-278000 kubelet[6398]: I1216 19:46:01.308779    6398 scope.go:117] "RemoveContainer" containerID="fc5c9b717b6d28b30da7bba6a9d7192e33d37159ff05eb53bfa647957db7fee2"
	Dec 16 19:46:01 functional-278000 kubelet[6398]: I1216 19:46:01.308950    6398 scope.go:117] "RemoveContainer" containerID="1d960ae2bc5b372902285938fae3f7568fb1fd4b1b9ff6aa9b8b6f0f3dbdd927"
	Dec 16 19:46:01 functional-278000 kubelet[6398]: E1216 19:46:01.309024    6398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64fc58db8c-s44pb_default(4b6e456e-74d1-47ba-bf25-663e0625ab5d)\"" pod="default/hello-node-64fc58db8c-s44pb" podUID="4b6e456e-74d1-47ba-bf25-663e0625ab5d"
	Dec 16 19:46:04 functional-278000 kubelet[6398]: I1216 19:46:04.183310    6398 memory_manager.go:355] "RemoveStaleState removing state" podUID="33400b94-a07a-4886-961e-f9b2b22f38c4" containerName="mount-munger"
	Dec 16 19:46:04 functional-278000 kubelet[6398]: I1216 19:46:04.271693    6398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9117d1fb-38ec-4565-943a-11015686a082-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-gnkqw\" (UID: \"9117d1fb-38ec-4565-943a-11015686a082\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-gnkqw"
	Dec 16 19:46:04 functional-278000 kubelet[6398]: I1216 19:46:04.271734    6398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpscp\" (UniqueName: \"kubernetes.io/projected/5088aee6-1a8b-4536-b444-51c6186aca7e-kube-api-access-lpscp\") pod \"kubernetes-dashboard-7779f9b69b-t5b5w\" (UID: \"5088aee6-1a8b-4536-b444-51c6186aca7e\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-t5b5w"
	Dec 16 19:46:04 functional-278000 kubelet[6398]: I1216 19:46:04.271747    6398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t27s\" (UniqueName: \"kubernetes.io/projected/9117d1fb-38ec-4565-943a-11015686a082-kube-api-access-7t27s\") pod \"dashboard-metrics-scraper-5d59dccf9b-gnkqw\" (UID: \"9117d1fb-38ec-4565-943a-11015686a082\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-gnkqw"
	Dec 16 19:46:04 functional-278000 kubelet[6398]: I1216 19:46:04.271757    6398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5088aee6-1a8b-4536-b444-51c6186aca7e-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-t5b5w\" (UID: \"5088aee6-1a8b-4536-b444-51c6186aca7e\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-t5b5w"
	
	
	==> storage-provisioner [09b7cd5503f2] <==
	I1216 19:44:06.135323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 19:44:06.139146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 19:44:06.139167       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 19:44:23.553720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 19:44:23.554712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd1f4abc-37d4-4a66-970c-03958403fad8", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-278000_c38c1924-d0ac-4036-9401-5c5e45465b43 became leader
	I1216 19:44:23.554856       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-278000_c38c1924-d0ac-4036-9401-5c5e45465b43!
	I1216 19:44:23.659555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-278000_c38c1924-d0ac-4036-9401-5c5e45465b43!
	
	
	==> storage-provisioner [6920344b4dfb] <==
	I1216 19:44:51.669436       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 19:44:51.677003       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 19:44:51.677040       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 19:45:09.091699       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 19:45:09.092010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd1f4abc-37d4-4a66-970c-03958403fad8", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-278000_fbec1684-8b28-4b44-b970-11be624e7fb2 became leader
	I1216 19:45:09.092754       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-278000_fbec1684-8b28-4b44-b970-11be624e7fb2!
	I1216 19:45:09.193329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-278000_fbec1684-8b28-4b44-b970-11be624e7fb2!
	I1216 19:45:26.911504       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1216 19:45:26.911837       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    5441000f-8252-494a-b951-ca9c1b002ba0 346 0 2024-12-16 19:43:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-16 19:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-41280759-6f2f-41ba-99d9-10eecf851687 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  41280759-6f2f-41ba-99d9-10eecf851687 659 0 2024-12-16 19:45:26 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-16 19:45:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-16 19:45:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1216 19:45:26.912231       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-41280759-6f2f-41ba-99d9-10eecf851687" provisioned
	I1216 19:45:26.912239       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1216 19:45:26.912242       1 volume_store.go:212] Trying to save persistentvolume "pvc-41280759-6f2f-41ba-99d9-10eecf851687"
	I1216 19:45:26.912752       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"41280759-6f2f-41ba-99d9-10eecf851687", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1216 19:45:26.916401       1 volume_store.go:219] persistentvolume "pvc-41280759-6f2f-41ba-99d9-10eecf851687" saved
	I1216 19:45:26.916676       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"41280759-6f2f-41ba-99d9-10eecf851687", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-41280759-6f2f-41ba-99d9-10eecf851687
	

-- /stdout --
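
Two recurring errors in the captured logs above are benign and easy to misread as the failure cause. The kube-controller-manager errors about serviceaccount "kubernetes-dashboard" not found are a create-ordering race: the dashboard ReplicaSets were reconciled before their ServiceAccount existed, and the kubelet entries at 19:46:04 show both pods being set up once the controller retried. The kube-proxy "Operation not supported" errors come from its attempt to clean up stale nftables rules on a kernel without nftables support; they are non-fatal, and the next lines show it proceeding with the iptables proxier. To confirm by hand that the dashboard ServiceAccount eventually landed, something like the following works (a sketch; the context name is copied from the logs above and assumes the profile is still running):

    $ kubectl --context functional-278000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
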
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-278000 -n functional-278000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-278000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5d59dccf9b-gnkqw kubernetes-dashboard-7779f9b69b-t5b5w
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-278000 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-gnkqw kubernetes-dashboard-7779f9b69b-t5b5w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-278000 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-gnkqw kubernetes-dashboard-7779f9b69b-t5b5w: exit status 1 (42.328333ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-278000/192.168.105.4
	Start Time:       Mon, 16 Dec 2024 11:45:55 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://5c58fd39ccff6ebb16bd80c11ae5ddaa6f474ee8cc0b13d1441379ffe0de9cef
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 16 Dec 2024 11:45:57 -0800
	      Finished:     Mon, 16 Dec 2024 11:45:57 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqfnr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kqfnr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-278000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.495s (1.495s including waiting). Image size: 3547125 bytes.
	  Normal  Created    10s   kubelet            Created container: mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-gnkqw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-t5b5w" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-278000 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-gnkqw kubernetes-dashboard-7779f9b69b-t5b5w: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (39.11s)
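
The post-mortem above is assembled by listing every pod whose phase is not Running and then describing each one; the two dashboard pods listed at helpers_test.go:272 were already gone by the time describe ran, hence the NotFound stderr. The same listing can be reproduced by hand against any profile (a sketch; <profile> is a placeholder):

    $ kubectl --context <profile> get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'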

TestMultiControlPlane/serial/StartCluster (250.51s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-922000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1216 11:48:28.328099    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:48:56.056788    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.598612    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.606206    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.619541    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.642875    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.686208    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.769533    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:18.932862    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:19.256187    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:19.899531    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:21.182882    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:23.746290    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-922000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (4m10.459191875s)

-- stdout --
	* [ha-922000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-922000" primary control-plane node in "ha-922000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:
	
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd 
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd 
	
	
	* Successfully unblocked bootpd process from firewall, retrying
	* Deleting "ha-922000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:
	
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd 
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd 
	
	
	* Successfully unblocked bootpd process from firewall, retrying
	
	

-- /stdout --
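
Each "firewall is blocking bootpd" hit above triggered a delete-and-recreate of the VM. On hosts with the macOS application firewall enabled, running the same two commands once before the suite may avoid that retry cycle (commands copied verbatim from the stdout above):

    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd
    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd
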
** stderr ** 
	I1216 11:46:13.739543    2435 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:46:13.739687    2435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:13.739690    2435 out.go:358] Setting ErrFile to fd 2...
	I1216 11:46:13.739692    2435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:13.739822    2435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:46:13.740992    2435 out.go:352] Setting JSON to false
	I1216 11:46:13.759998    2435 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":944,"bootTime":1734377429,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:46:13.760079    2435 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:46:13.763654    2435 out.go:177] * [ha-922000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:46:13.770603    2435 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 11:46:13.770632    2435 notify.go:220] Checking for updates...
	I1216 11:46:13.778669    2435 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:46:13.781686    2435 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:46:13.784668    2435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:46:13.791647    2435 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 11:46:13.799478    2435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:46:13.802965    2435 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:46:13.806661    2435 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 11:46:13.812666    2435 start.go:297] selected driver: qemu2
	I1216 11:46:13.812674    2435 start.go:901] validating driver "qemu2" against <nil>
	I1216 11:46:13.812682    2435 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:46:13.815671    2435 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:46:13.818680    2435 out.go:177] * Automatically selected the socket_vmnet network
	I1216 11:46:13.820070    2435 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:46:13.820088    2435 cni.go:84] Creating CNI manager for ""
	I1216 11:46:13.820111    2435 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1216 11:46:13.820120    2435 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 11:46:13.820147    2435 start.go:340] cluster config:
	{Name:ha-922000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:46:13.825044    2435 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:46:13.833764    2435 out.go:177] * Starting "ha-922000" primary control-plane node in "ha-922000" cluster
	I1216 11:46:13.837646    2435 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 11:46:13.837661    2435 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 11:46:13.837673    2435 cache.go:56] Caching tarball of preloaded images
	I1216 11:46:13.837783    2435 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 11:46:13.837790    2435 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 11:46:13.838004    2435 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/ha-922000/config.json ...
	I1216 11:46:13.838017    2435 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/ha-922000/config.json: {Name:mk38faba2a66773553a85c4df09e9d15f7b5c1b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:46:13.838542    2435 start.go:360] acquireMachinesLock for ha-922000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:46:13.838592    2435 start.go:364] duration metric: took 44.75µs to acquireMachinesLock for "ha-922000"
	I1216 11:46:13.838606    2435 start.go:93] Provisioning new machine with config: &{Name:ha-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 11:46:13.838658    2435 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 11:46:13.846665    2435 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 11:46:13.871319    2435 start.go:159] libmachine.API.Create for "ha-922000" (driver="qemu2")
	I1216 11:46:13.871363    2435 client.go:168] LocalClient.Create starting
	I1216 11:46:13.871439    2435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 11:46:13.871475    2435 main.go:141] libmachine: Decoding PEM data...
	I1216 11:46:13.871490    2435 main.go:141] libmachine: Parsing certificate...
	I1216 11:46:13.871529    2435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 11:46:13.871558    2435 main.go:141] libmachine: Decoding PEM data...
	I1216 11:46:13.871566    2435 main.go:141] libmachine: Parsing certificate...
	I1216 11:46:13.872074    2435 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 11:46:14.084113    2435 main.go:141] libmachine: Creating SSH key...
	I1216 11:46:14.242452    2435 main.go:141] libmachine: Creating Disk image...
	I1216 11:46:14.242460    2435 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 11:46:14.242690    2435 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2
	I1216 11:46:14.258739    2435 main.go:141] libmachine: STDOUT: 
	I1216 11:46:14.258761    2435 main.go:141] libmachine: STDERR: 
	I1216 11:46:14.258838    2435 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2 +20000M
	I1216 11:46:14.267403    2435 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 11:46:14.267418    2435 main.go:141] libmachine: STDERR: 
	I1216 11:46:14.267438    2435 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2
	I1216 11:46:14.267442    2435 main.go:141] libmachine: Starting QEMU VM...
	I1216 11:46:14.267451    2435 qemu.go:418] Using hvf for hardware acceleration
	I1216 11:46:14.267484    2435 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:25:9c:0d:eb:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2
	I1216 11:46:14.316711    2435 main.go:141] libmachine: STDOUT: 
	I1216 11:46:14.316738    2435 main.go:141] libmachine: STDERR: 
	I1216 11:46:14.316742    2435 main.go:141] libmachine: Attempt 0
	I1216 11:46:14.316774    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:14.316888    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:14.316902    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:14.316909    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:14.316925    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:14.316931    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:16.319081    2435 main.go:141] libmachine: Attempt 1
	I1216 11:46:16.319306    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:16.319768    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:16.319830    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:16.319872    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:16.319902    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:16.319937    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:18.322211    2435 main.go:141] libmachine: Attempt 2
	I1216 11:46:18.322297    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:18.322735    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:18.322790    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:18.322823    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:18.322851    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:18.322878    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:20.325064    2435 main.go:141] libmachine: Attempt 3
	I1216 11:46:20.325114    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:20.325245    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:20.325259    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:20.325264    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:20.325269    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:20.325274    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:22.327308    2435 main.go:141] libmachine: Attempt 4
	I1216 11:46:22.327321    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:22.327367    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:22.327377    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:22.327382    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:22.327388    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:22.327394    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:24.329468    2435 main.go:141] libmachine: Attempt 5
	I1216 11:46:24.329501    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:24.329568    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:24.329579    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:24.329584    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:24.329589    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:24.329594    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:26.331649    2435 main.go:141] libmachine: Attempt 6
	I1216 11:46:26.331679    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:26.331787    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:26.331802    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:26.331808    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:26.331827    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:26.331832    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:28.332874    2435 main.go:141] libmachine: Attempt 7
	I1216 11:46:28.332934    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:28.333054    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:28.333073    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:28.333080    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:28.333086    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:28.333092    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:30.335121    2435 main.go:141] libmachine: Attempt 8
	I1216 11:46:30.335142    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:30.335195    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:30.335202    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:30.335213    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:30.335219    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:30.335224    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:32.337237    2435 main.go:141] libmachine: Attempt 9
	I1216 11:46:32.337251    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:32.337292    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:32.337297    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:32.337304    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:32.337309    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:32.337313    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:34.339352    2435 main.go:141] libmachine: Attempt 10
	I1216 11:46:34.339388    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:34.339448    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:34.339461    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:34.339468    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:34.339473    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:34.339478    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:36.341525    2435 main.go:141] libmachine: Attempt 11
	I1216 11:46:36.341536    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:36.341583    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:36.341591    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:36.341600    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:36.341607    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:36.341614    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:38.343637    2435 main.go:141] libmachine: Attempt 12
	I1216 11:46:38.343650    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:38.343692    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:38.343698    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:38.343704    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:38.343710    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:38.343715    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:40.345729    2435 main.go:141] libmachine: Attempt 13
	I1216 11:46:40.345749    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:40.345806    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:40.345818    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:40.345824    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:40.345830    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:40.345834    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:42.347881    2435 main.go:141] libmachine: Attempt 14
	I1216 11:46:42.347907    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:42.347983    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:42.347997    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:42.348003    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:42.348008    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:42.348019    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:44.350067    2435 main.go:141] libmachine: Attempt 15
	I1216 11:46:44.350082    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:44.350126    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:44.350137    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:44.350145    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:44.350150    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:44.350156    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:46.352207    2435 main.go:141] libmachine: Attempt 16
	I1216 11:46:46.352247    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:46.352334    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:46.352349    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:46.352354    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:46.352360    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:46.352366    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:48.354393    2435 main.go:141] libmachine: Attempt 17
	I1216 11:46:48.354402    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:48.354443    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:48.354460    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:48.354465    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:48.354471    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:48.354476    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:50.356584    2435 main.go:141] libmachine: Attempt 18
	I1216 11:46:50.356617    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:50.356733    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:50.356751    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:50.356758    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:50.356764    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:50.356769    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:52.358840    2435 main.go:141] libmachine: Attempt 19
	I1216 11:46:52.358890    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:52.358958    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:52.358971    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:52.358977    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:52.358983    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:52.358989    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:54.360529    2435 main.go:141] libmachine: Attempt 20
	I1216 11:46:54.360561    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:54.360636    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:54.360645    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:54.360651    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:54.360656    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:54.360663    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:56.362734    2435 main.go:141] libmachine: Attempt 21
	I1216 11:46:56.362753    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:56.362838    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:56.362851    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:56.362858    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:56.362862    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:56.362869    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:46:58.364898    2435 main.go:141] libmachine: Attempt 22
	I1216 11:46:58.364907    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:46:58.364955    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:46:58.364962    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:46:58.364966    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:46:58.364979    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:46:58.364984    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:00.367017    2435 main.go:141] libmachine: Attempt 23
	I1216 11:47:00.367041    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:00.367114    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:00.367124    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:00.367130    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:00.367136    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:00.367142    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:02.369186    2435 main.go:141] libmachine: Attempt 24
	I1216 11:47:02.369197    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:02.369239    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:02.369246    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:02.369251    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:02.369256    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:02.369262    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:04.371279    2435 main.go:141] libmachine: Attempt 25
	I1216 11:47:04.371287    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:04.371346    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:04.371362    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:04.371369    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:04.371374    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:04.371379    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:06.373395    2435 main.go:141] libmachine: Attempt 26
	I1216 11:47:06.373402    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:06.373436    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:06.373444    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:06.373448    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:06.373453    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:06.373459    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:08.375473    2435 main.go:141] libmachine: Attempt 27
	I1216 11:47:08.375488    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:08.375520    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:08.375527    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:08.375533    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:08.375553    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:08.375565    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:10.377589    2435 main.go:141] libmachine: Attempt 28
	I1216 11:47:10.377604    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:10.377638    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:10.377645    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:10.377649    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:10.377655    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:10.377661    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:12.379675    2435 main.go:141] libmachine: Attempt 29
	I1216 11:47:12.379682    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:12.379717    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:12.379725    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:12.379731    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:12.379735    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:12.379740    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:14.381785    2435 main.go:141] libmachine: Attempt 30
	I1216 11:47:14.381808    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:14.381880    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:14.381893    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:14.381900    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:14.381906    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:14.381911    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:16.383934    2435 main.go:141] libmachine: Attempt 31
	I1216 11:47:16.383946    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:16.383982    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:16.383989    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:16.383994    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:16.384002    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:16.384021    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:18.386056    2435 main.go:141] libmachine: Attempt 32
	I1216 11:47:18.386063    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:18.386106    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:18.386113    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:18.386117    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:18.386122    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:18.386127    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:20.388155    2435 main.go:141] libmachine: Attempt 33
	I1216 11:47:20.388170    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:20.388209    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:20.388216    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:20.388225    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:20.388232    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:20.388238    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:22.390253    2435 main.go:141] libmachine: Attempt 34
	I1216 11:47:22.390270    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:22.390309    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:22.390317    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:22.390323    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:22.390332    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:22.390338    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:24.392365    2435 main.go:141] libmachine: Attempt 35
	I1216 11:47:24.392372    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:24.392410    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:24.392417    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:24.392422    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:24.392428    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:24.392438    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:26.394451    2435 main.go:141] libmachine: Attempt 36
	I1216 11:47:26.394461    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:26.394509    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:26.394517    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:26.394531    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:26.394536    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:26.394542    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:28.396554    2435 main.go:141] libmachine: Attempt 37
	I1216 11:47:28.396568    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:28.396605    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:28.396610    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:28.396616    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:28.396620    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:28.396625    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:30.398652    2435 main.go:141] libmachine: Attempt 38
	I1216 11:47:30.398668    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:30.398710    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:30.398717    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:30.398723    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:30.398728    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:30.398741    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:32.400754    2435 main.go:141] libmachine: Attempt 39
	I1216 11:47:32.400763    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:32.400800    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:32.400808    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:32.400814    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:32.400819    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:32.400825    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:34.402871    2435 main.go:141] libmachine: Attempt 40
	I1216 11:47:34.402907    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:34.402999    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:34.403012    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:34.403019    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:34.403024    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:34.403030    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:36.405057    2435 main.go:141] libmachine: Attempt 41
	I1216 11:47:36.405070    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:36.405120    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:36.405127    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:36.405133    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:36.405138    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:36.405144    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:38.457806    2435 main.go:141] libmachine: Attempt 42
	I1216 11:47:38.457835    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:38.457932    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:38.457946    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:38.457951    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:38.457956    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:38.457960    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:40.460005    2435 main.go:141] libmachine: Attempt 43
	I1216 11:47:40.460017    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:40.460058    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:40.460065    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:40.460075    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:40.460087    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:40.460094    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:42.462150    2435 main.go:141] libmachine: Attempt 44
	I1216 11:47:42.462166    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:42.462207    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:42.462217    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:42.462222    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:42.462231    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:42.462236    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:44.464287    2435 main.go:141] libmachine: Attempt 45
	I1216 11:47:44.464303    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:44.464363    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:44.464369    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:44.464375    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:44.464380    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:44.464387    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:46.466419    2435 main.go:141] libmachine: Attempt 46
	I1216 11:47:46.466430    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:46.466469    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:46.466478    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:46.466483    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:46.466496    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:46.466502    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:48.468527    2435 main.go:141] libmachine: Attempt 47
	I1216 11:47:48.468543    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:48.468584    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:48.468591    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:48.468597    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:48.468603    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:48.468607    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:50.470684    2435 main.go:141] libmachine: Attempt 48
	I1216 11:47:50.470707    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:50.470806    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:50.470822    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:50.470830    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:50.470835    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:50.470842    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:52.472879    2435 main.go:141] libmachine: Attempt 49
	I1216 11:47:52.472888    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:52.472934    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:52.472944    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:52.472950    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:52.472956    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:52.472962    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:54.474995    2435 main.go:141] libmachine: Attempt 50
	I1216 11:47:54.475004    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:54.475052    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:54.475059    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:54.475069    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:54.475075    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:54.475079    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:56.477129    2435 main.go:141] libmachine: Attempt 51
	I1216 11:47:56.477147    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:56.477206    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:56.477215    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:56.477220    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:56.477226    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:56.477232    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:47:58.479279    2435 main.go:141] libmachine: Attempt 52
	I1216 11:47:58.479313    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:47:58.479357    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:47:58.479367    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:47:58.479373    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:47:58.479378    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:47:58.479382    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:00.481422    2435 main.go:141] libmachine: Attempt 53
	I1216 11:48:00.481437    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:00.481490    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:00.481501    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:00.481507    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:00.481512    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:00.481517    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:02.483563    2435 main.go:141] libmachine: Attempt 54
	I1216 11:48:02.483572    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:02.483625    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:02.483632    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:02.483637    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:02.483642    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:02.483647    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:04.485684    2435 main.go:141] libmachine: Attempt 55
	I1216 11:48:04.485695    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:04.485753    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:04.485760    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:04.485767    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:04.485772    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:04.485777    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:06.487802    2435 main.go:141] libmachine: Attempt 56
	I1216 11:48:06.487812    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:06.487844    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:06.487850    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:06.487854    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:06.487858    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:06.487863    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:08.489906    2435 main.go:141] libmachine: Attempt 57
	I1216 11:48:08.489920    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:08.489979    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:08.489989    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:08.489993    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:08.489998    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:08.490003    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:10.492036    2435 main.go:141] libmachine: Attempt 58
	I1216 11:48:10.492044    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:10.492082    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:10.492089    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:10.492094    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:10.492102    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:10.492107    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:12.494134    2435 main.go:141] libmachine: Attempt 59
	I1216 11:48:12.494141    2435 main.go:141] libmachine: Searching for 2e:25:9c:0d:eb:1c in /var/db/dhcpd_leases ...
	I1216 11:48:12.494179    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:12.494188    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:12.494193    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:12.494198    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:12.494208    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
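The repeated "Attempt N" blocks above are libmachine polling macOS's bootpd lease file every two seconds for the new VM's MAC address; the same four entries keep turning up, all belonging to other machines, so the lookup never resolves. A minimal sketch of that kind of lookup, assuming the stanza-style /var/db/dhcpd_leases format that the logged entries mirror (this is illustrative, not minikube's actual parser):

```go
// Illustrative sketch of the lookup behind the "Searching for <MAC> in
// /var/db/dhcpd_leases" lines: scan bootpd's lease file for an entry
// whose hw_address matches the VM's MAC and return its ip_address.
// Assumes ip_address precedes hw_address within each stanza, as in the
// standard bootpd lease format.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func findIPByMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// e.g. hw_address=1,8a:aa:33:50:bd:cb; ignore the type byte.
			if strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}
```

In this run every attempt returns the same four foreign leases, so a lookup like this keeps failing until the driver's timeout expires.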
	I1216 11:48:14.501081    2435 out.go:177] * Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:
	
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd 
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd 
	
	
	I1216 11:48:14.505048    2435 firewall.go:74] testing: [sudo -n /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd]
	I1216 11:48:14.524692    2435 firewall.go:82] running: [sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd]
	I1216 11:48:14.541472    2435 firewall.go:74] testing: [sudo -n /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd]
	I1216 11:48:14.556559    2435 firewall.go:82] running: [sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd]
	I1216 11:48:14.574971    2435 out.go:177] * Successfully unblocked bootpd process from firewall, retrying
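The firewall.go lines show a two-phase pattern: each socketfilterfw command is first probed with `sudo -n` (which fails immediately rather than prompting when passwordless sudo is unavailable) and only then run for real. A hedged sketch of that test-then-run sequence, not the actual firewall.go code:

```go
// Sketch of the test-then-run pattern visible in the firewall.go lines:
// probe each command with "sudo -n", then execute it with plain sudo.
package main

import (
	"fmt"
	"os/exec"
)

func unblockBootpd() error {
	commands := [][]string{
		{"/usr/libexec/ApplicationFirewall/socketfilterfw", "--add", "/usr/libexec/bootpd"},
		{"/usr/libexec/ApplicationFirewall/socketfilterfw", "--unblock", "/usr/libexec/bootpd"},
	}
	for _, args := range commands {
		// testing: [sudo -n ...] fails fast if a password would be required.
		if err := exec.Command("sudo", append([]string{"-n"}, args...)...).Run(); err != nil {
			return fmt.Errorf("passwordless sudo unavailable for %v: %w", args, err)
		}
		// running: [sudo ...] performs the actual firewall change.
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v (%s)", args, err, out)
		}
	}
	return nil
}
```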
	I1216 11:48:14.578059    2435 client.go:171] duration metric: took 2m0.656699291s to LocalClient.Create
	I1216 11:48:16.580187    2435 start.go:128] duration metric: took 2m2.69149975s to createHost
	I1216 11:48:16.580232    2435 start.go:83] releasing machines lock for "ha-922000", held for 2m2.691639958s
	W1216 11:48:16.580246    2435 start.go:714] error starting host: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 2e:25:9c:0d:eb:1c
	I1216 11:48:16.584567    2435 out.go:177] * Deleting "ha-922000" in qemu2 ...
	W1216 11:48:16.598206    2435 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 2e:25:9c:0d:eb:1c
	! StartHost failed, but will try again: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 2e:25:9c:0d:eb:1c
	I1216 11:48:16.598216    2435 start.go:729] Will try again in 5 seconds ...
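At this point LocalClient.Create has timed out, the machines lock is released, the half-created "ha-922000" host is deleted, and start.go schedules exactly one more attempt after a fixed five-second delay. A hypothetical sketch of that delete-and-retry shape (the function and parameter names here are illustrative, not minikube's):

```go
// Hypothetical shape of the delete-and-retry flow logged above.
package main

import "time"

func startHostWithRetry(create func() error, deleteHost func()) error {
	if err := create(); err != nil {
		deleteHost()                // "Deleting \"ha-922000\" in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return create()             // second and final attempt
	}
	return nil
}
```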
	I1216 11:48:21.600462    2435 start.go:360] acquireMachinesLock for ha-922000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:48:21.601096    2435 start.go:364] duration metric: took 523.166µs to acquireMachinesLock for "ha-922000"
	I1216 11:48:21.601248    2435 start.go:93] Provisioning new machine with config: &{Name:ha-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 11:48:21.601536    2435 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 11:48:21.618285    2435 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 11:48:21.668805    2435 start.go:159] libmachine.API.Create for "ha-922000" (driver="qemu2")
	I1216 11:48:21.668843    2435 client.go:168] LocalClient.Create starting
	I1216 11:48:21.668997    2435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 11:48:21.669087    2435 main.go:141] libmachine: Decoding PEM data...
	I1216 11:48:21.669105    2435 main.go:141] libmachine: Parsing certificate...
	I1216 11:48:21.669185    2435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 11:48:21.669261    2435 main.go:141] libmachine: Decoding PEM data...
	I1216 11:48:21.669277    2435 main.go:141] libmachine: Parsing certificate...
	I1216 11:48:21.669887    2435 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 11:48:21.843947    2435 main.go:141] libmachine: Creating SSH key...
	I1216 11:48:21.917719    2435 main.go:141] libmachine: Creating Disk image...
	I1216 11:48:21.917727    2435 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 11:48:21.917943    2435 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2
	I1216 11:48:21.927750    2435 main.go:141] libmachine: STDOUT: 
	I1216 11:48:21.927765    2435 main.go:141] libmachine: STDERR: 
	I1216 11:48:21.927840    2435 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2 +20000M
	I1216 11:48:21.936274    2435 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 11:48:21.936291    2435 main.go:141] libmachine: STDERR: 
	I1216 11:48:21.936303    2435 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2
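Disk creation is two qemu-img invocations: convert the raw scaffold image to qcow2, then grow it by the requested size (the log's "+20000M"). A sketch of those two steps, assuming qemu-img is on PATH; the error handling is illustrative:

```go
// The two qemu-img steps from the log, wrapped in a helper: convert the
// raw image to qcow2, then resize the result by extraMB megabytes.
package main

import (
	"fmt"
	"os/exec"
)

func createDiskImage(rawPath, qcow2Path string, extraMB int) error {
	convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcow2Path)
	if out, err := convert.CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v (%s)", err, out)
	}
	resize := exec.Command("qemu-img", "resize", qcow2Path, fmt.Sprintf("+%dM", extraMB))
	if out, err := resize.CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v (%s)", err, out)
	}
	return nil
}
```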
	I1216 11:48:21.936319    2435 main.go:141] libmachine: Starting QEMU VM...
	I1216 11:48:21.936329    2435 qemu.go:418] Using hvf for hardware acceleration
	I1216 11:48:21.936364    2435 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:aa:33:50:bd:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/disk.qcow2
	I1216 11:48:21.973321    2435 main.go:141] libmachine: STDOUT: 
	I1216 11:48:21.973343    2435 main.go:141] libmachine: STDERR: 
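Per the logged command line, the VM is launched by handing the entire qemu-system-aarch64 invocation to /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the socket_vmnet unix socket and passes it to QEMU as inherited file descriptor 3; that is why `-netdev socket,id=net0,fd=3` is paired with the virtio-net device carrying the freshly generated MAC. A reduced, illustrative sketch with most flags elided:

```go
// Reduced version of the launch logged above: socket_vmnet_client
// wraps QEMU so the guest NIC rides the socket_vmnet network via fd 3.
package main

import (
	"fmt"
	"os/exec"
)

func startVM(mac, iso, disk string) error {
	args := []string{
		"/var/run/socket_vmnet", // socket the client connects to
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf", // hardware acceleration, per qemu.go:418
		"-m", "2200", "-smp", "2",
		"-boot", "d", "-cdrom", iso,
		"-device", "virtio-net-pci,netdev=net0,mac=" + mac,
		"-netdev", "socket,id=net0,fd=3",
		"-daemonize", disk,
	}
	out, err := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("socket_vmnet_client: %v (%s)", err, out)
	}
	return nil
}
```

The attempts that follow then repeat the same lease-file polling as before, now for the new MAC 8a:aa:33:50:bd:cb, and again only the four pre-existing leases are found.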
	I1216 11:48:21.973347    2435 main.go:141] libmachine: Attempt 0
	I1216 11:48:21.973372    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:21.973520    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:21.973534    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:21.973540    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:21.973546    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:21.973554    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:23.975808    2435 main.go:141] libmachine: Attempt 1
	I1216 11:48:23.976022    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:23.976418    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:23.976472    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:23.976527    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:23.976558    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:23.976592    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:25.978798    2435 main.go:141] libmachine: Attempt 2
	I1216 11:48:25.978893    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:25.979279    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:25.979332    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:25.979364    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:25.979396    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:25.979424    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:27.981577    2435 main.go:141] libmachine: Attempt 3
	I1216 11:48:27.981610    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:27.981715    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:27.981729    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:27.981735    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:27.981739    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:27.981746    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:29.983787    2435 main.go:141] libmachine: Attempt 4
	I1216 11:48:29.983814    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:29.983859    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:29.983866    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:29.983877    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:29.983883    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:29.983890    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:31.985956    2435 main.go:141] libmachine: Attempt 5
	I1216 11:48:31.985987    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:31.986092    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:31.986111    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:31.986117    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:31.986122    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:31.986128    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:33.988194    2435 main.go:141] libmachine: Attempt 6
	I1216 11:48:33.988214    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:33.988295    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:33.988305    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:33.988310    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:33.988314    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:33.988320    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:35.990380    2435 main.go:141] libmachine: Attempt 7
	I1216 11:48:35.990391    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:35.990485    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:35.990493    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:35.990499    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:35.990504    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:35.990509    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:37.992571    2435 main.go:141] libmachine: Attempt 8
	I1216 11:48:37.992595    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:37.992658    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:37.992674    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:37.992682    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:37.992687    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:37.992692    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:39.994739    2435 main.go:141] libmachine: Attempt 9
	I1216 11:48:39.994747    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:39.994793    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:39.994804    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:39.994809    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:39.994814    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:39.994820    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:41.996849    2435 main.go:141] libmachine: Attempt 10
	I1216 11:48:41.996865    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:41.996921    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:41.996928    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:41.996933    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:41.996938    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:41.996943    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:43.998969    2435 main.go:141] libmachine: Attempt 11
	I1216 11:48:43.998978    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:43.999018    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:43.999028    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:43.999033    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:43.999040    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:43.999050    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:46.001083    2435 main.go:141] libmachine: Attempt 12
	I1216 11:48:46.001091    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:46.001134    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:46.001141    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:46.001145    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:46.001151    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:46.001156    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:48.003181    2435 main.go:141] libmachine: Attempt 13
	I1216 11:48:48.003196    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:48.003233    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:48.003239    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:48.003245    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:48.003254    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:48.003260    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:50.003346    2435 main.go:141] libmachine: Attempt 14
	I1216 11:48:50.003354    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:50.003400    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:50.003406    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:50.003411    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:50.003425    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:50.003429    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:52.005455    2435 main.go:141] libmachine: Attempt 15
	I1216 11:48:52.005473    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:52.005516    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:52.005523    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:52.005529    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:52.005534    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:52.005539    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:54.007566    2435 main.go:141] libmachine: Attempt 16
	I1216 11:48:54.007574    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:54.007624    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:54.007644    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:54.007649    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:54.007656    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:54.007661    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:56.009750    2435 main.go:141] libmachine: Attempt 17
	I1216 11:48:56.009790    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:56.009859    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:56.009871    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:56.009876    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:56.009880    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:56.009885    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:48:58.011932    2435 main.go:141] libmachine: Attempt 18
	I1216 11:48:58.011945    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:48:58.011986    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:48:58.012005    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:48:58.012012    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:48:58.012017    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:48:58.012025    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:00.014049    2435 main.go:141] libmachine: Attempt 19
	I1216 11:49:00.014060    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:00.014101    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:00.014106    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:00.014111    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:00.014116    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:00.014121    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:02.016155    2435 main.go:141] libmachine: Attempt 20
	I1216 11:49:02.016185    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:02.016240    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:02.016248    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:02.016263    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:02.016269    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:02.016273    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:04.018299    2435 main.go:141] libmachine: Attempt 21
	I1216 11:49:04.018309    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:04.018353    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:04.018361    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:04.018367    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:04.018373    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:04.018377    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:06.020423    2435 main.go:141] libmachine: Attempt 22
	I1216 11:49:06.020431    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:06.020494    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:06.020502    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:06.020510    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:06.020514    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:06.020519    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:08.022542    2435 main.go:141] libmachine: Attempt 23
	I1216 11:49:08.022551    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:08.022583    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:08.022591    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:08.022600    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:08.022605    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:08.022610    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:10.024650    2435 main.go:141] libmachine: Attempt 24
	I1216 11:49:10.024664    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:10.024706    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:10.024712    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:10.024717    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:10.024723    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:10.024729    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:12.026780    2435 main.go:141] libmachine: Attempt 25
	I1216 11:49:12.026794    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:12.026829    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:12.026837    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:12.026841    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:12.026847    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:12.026852    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:14.028913    2435 main.go:141] libmachine: Attempt 26
	I1216 11:49:14.028944    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:14.029009    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:14.029023    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:14.029029    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:14.029034    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:14.029047    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:16.031114    2435 main.go:141] libmachine: Attempt 27
	I1216 11:49:16.031132    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:16.031209    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:16.031221    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:16.031226    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:16.031231    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:16.031237    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:18.033270    2435 main.go:141] libmachine: Attempt 28
	I1216 11:49:18.033301    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:18.033337    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:18.033345    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:18.033355    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:18.033361    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:18.033366    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:20.035419    2435 main.go:141] libmachine: Attempt 29
	I1216 11:49:20.035427    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:20.035470    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:20.035479    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:20.035484    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:20.035489    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:20.035495    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:22.037519    2435 main.go:141] libmachine: Attempt 30
	I1216 11:49:22.037534    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:22.037575    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:22.037581    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:22.037587    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:22.037592    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:22.037596    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:24.039624    2435 main.go:141] libmachine: Attempt 31
	I1216 11:49:24.039643    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:24.039680    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:24.039686    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:24.039691    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:24.039696    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:24.039702    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:26.041724    2435 main.go:141] libmachine: Attempt 32
	I1216 11:49:26.041734    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:26.041768    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:26.041775    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:26.041779    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:26.041784    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:26.041789    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:28.041992    2435 main.go:141] libmachine: Attempt 33
	I1216 11:49:28.042001    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:28.042036    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:28.042042    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:28.042046    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:28.042051    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:28.042055    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:30.044079    2435 main.go:141] libmachine: Attempt 34
	I1216 11:49:30.044094    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:30.044129    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:30.044157    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:30.044166    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:30.044170    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:30.044175    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:32.046199    2435 main.go:141] libmachine: Attempt 35
	I1216 11:49:32.046209    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:32.046256    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:32.046263    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:32.046279    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:32.046284    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:32.046289    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:34.048313    2435 main.go:141] libmachine: Attempt 36
	I1216 11:49:34.048324    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:34.048367    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:34.048373    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:34.048379    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:34.048384    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:34.048389    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:36.050342    2435 main.go:141] libmachine: Attempt 37
	I1216 11:49:36.050350    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:36.050387    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:36.050394    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:36.050401    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:36.050406    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:36.050411    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:38.052499    2435 main.go:141] libmachine: Attempt 38
	I1216 11:49:38.052532    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:38.052606    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:38.052618    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:38.052624    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:38.052629    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:38.052635    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:40.054668    2435 main.go:141] libmachine: Attempt 39
	I1216 11:49:40.054695    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:40.054744    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:40.054751    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:40.054756    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:40.054760    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:40.054766    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:42.056813    2435 main.go:141] libmachine: Attempt 40
	I1216 11:49:42.056823    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:42.056857    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:42.056867    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:42.056873    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:42.056879    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:42.056885    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:44.058909    2435 main.go:141] libmachine: Attempt 41
	I1216 11:49:44.058919    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:44.058958    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:44.058965    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:44.058971    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:44.058977    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:44.058981    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:46.061015    2435 main.go:141] libmachine: Attempt 42
	I1216 11:49:46.061032    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:46.061075    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:46.061083    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:46.061094    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:46.061100    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:46.061106    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:48.063136    2435 main.go:141] libmachine: Attempt 43
	I1216 11:49:48.063144    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:48.063181    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:48.063196    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:48.063204    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:48.063211    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:48.063220    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:50.065246    2435 main.go:141] libmachine: Attempt 44
	I1216 11:49:50.065259    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:50.065298    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:50.065306    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:50.065314    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:50.065321    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:50.065325    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:52.067352    2435 main.go:141] libmachine: Attempt 45
	I1216 11:49:52.067363    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:52.067396    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:52.067402    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:52.067407    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:52.067412    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:52.067417    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:54.069463    2435 main.go:141] libmachine: Attempt 46
	I1216 11:49:54.069485    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:54.069549    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:54.069562    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:54.069568    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:54.069574    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:54.069580    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:56.071623    2435 main.go:141] libmachine: Attempt 47
	I1216 11:49:56.071655    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:56.071701    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:56.071709    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:56.071716    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:56.071722    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:56.071728    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:49:58.073767    2435 main.go:141] libmachine: Attempt 48
	I1216 11:49:58.073777    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:49:58.073825    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:49:58.073834    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:49:58.073838    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:49:58.073844    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:49:58.073849    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:00.075920    2435 main.go:141] libmachine: Attempt 49
	I1216 11:50:00.075964    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:00.076045    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:00.076056    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:00.076062    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:00.076068    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:00.076072    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:02.078145    2435 main.go:141] libmachine: Attempt 50
	I1216 11:50:02.078200    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:02.078282    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:02.078296    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:02.078302    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:02.078308    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:02.078315    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:04.080367    2435 main.go:141] libmachine: Attempt 51
	I1216 11:50:04.080392    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:04.080469    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:04.080483    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:04.080490    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:04.080495    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:04.080502    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:06.082565    2435 main.go:141] libmachine: Attempt 52
	I1216 11:50:06.082625    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:06.082693    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:06.082707    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:06.082713    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:06.082718    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:06.082724    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:08.084759    2435 main.go:141] libmachine: Attempt 53
	I1216 11:50:08.084773    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:08.084814    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:08.084820    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:08.084827    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:08.084832    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:08.084840    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:10.086892    2435 main.go:141] libmachine: Attempt 54
	I1216 11:50:10.086915    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:10.086997    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:10.087011    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:10.087016    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:10.087021    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:10.087029    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:12.089067    2435 main.go:141] libmachine: Attempt 55
	I1216 11:50:12.089085    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:12.089132    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:12.089139    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:12.089143    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:12.089149    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:12.089155    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:14.091216    2435 main.go:141] libmachine: Attempt 56
	I1216 11:50:14.091307    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:14.091402    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:14.091417    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:14.091425    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:14.091430    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:14.091435    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:16.093480    2435 main.go:141] libmachine: Attempt 57
	I1216 11:50:16.093496    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:16.093545    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:16.093554    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:16.093562    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:16.093567    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:16.093573    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:18.095652    2435 main.go:141] libmachine: Attempt 58
	I1216 11:50:18.095698    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:18.095761    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:18.095774    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:18.095780    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:18.095785    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:18.095790    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:20.097855    2435 main.go:141] libmachine: Attempt 59
	I1216 11:50:20.097868    2435 main.go:141] libmachine: Searching for 8a:aa:33:50:bd:cb in /var/db/dhcpd_leases ...
	I1216 11:50:20.097925    2435 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1216 11:50:20.097935    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:f6:9c:f1:aa:a6:52 ID:1,f6:9c:f1:aa:a6:52 Lease:0x676090de}
	I1216 11:50:20.097940    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:36:2c:62:de:22:c0 ID:1,36:2c:62:de:22:c0 Lease:0x6760828c}
	I1216 11:50:20.097950    2435 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:96:da:3c:13:fb:64 ID:1,96:da:3c:13:fb:64 Lease:0x67608259}
	I1216 11:50:20.097956    2435 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67608dfa}
	I1216 11:50:22.105288    2435 out.go:177] * Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:
	
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd 
	    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd 
	
	
	I1216 11:50:22.108320    2435 firewall.go:74] testing: [sudo -n /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd]
	I1216 11:50:22.125873    2435 firewall.go:82] running: [sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd]
	I1216 11:50:22.143069    2435 firewall.go:74] testing: [sudo -n /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd]
	I1216 11:50:22.158276    2435 firewall.go:82] running: [sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd]
	I1216 11:50:22.177137    2435 out.go:177] * Successfully unblocked bootpd process from firewall, retrying
	I1216 11:50:22.181201    2435 client.go:171] duration metric: took 2m0.511973625s to LocalClient.Create
	I1216 11:50:24.183252    2435 start.go:128] duration metric: took 2m2.58129425s to createHost
	I1216 11:50:24.183266    2435 start.go:83] releasing machines lock for "ha-922000", held for 2m2.581766667s
	W1216 11:50:24.183387    2435 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-922000" may fix it: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 8a:aa:33:50:bd:cb
	* Failed to start qemu2 VM. Running "minikube delete -p ha-922000" may fix it: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 8a:aa:33:50:bd:cb
	I1216 11:50:24.191616    2435 out.go:201] 
	W1216 11:50:24.194596    2435 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 8a:aa:33:50:bd:cb
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 8a:aa:33:50:bd:cb
	W1216 11:50:24.194602    2435 out.go:270] * 
	* 
	W1216 11:50:24.195105    2435 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:50:24.206601    2435 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-922000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (38.829417ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:50:24.268329    2534 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:50:24.268338    2534 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (250.51s)
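The "Attempt" blocks above are the qemu2 driver polling /var/db/dhcpd_leases for the MAC address of the new VM (8a:aa:33:50:bd:cb); the machine never acquired a lease, so LocalClient.Create gave up after its 2m0s budget. A minimal sketch of that lookup in Go, written against the lease-file layout implied by the entries logged above (the helper name findIPByMAC and the exact on-disk field names are assumptions, not minikube's implementation):

    // findIPByMAC scans a macOS bootpd lease file for a block whose
    // hw_address contains mac and returns that block's ip_address.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func findIPByMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                // Assumes ip_address precedes hw_address inside a block.
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
                return ip, nil
            case line == "}":
                ip = "" // block closed without a match; reset
            }
        }
        return "", sc.Err()
    }

    func main() {
        ip, err := findIPByMAC("/var/db/dhcpd_leases", "8a:aa:33:50:bd:cb")
        if err != nil || ip == "" {
            // The condition this run hit on every attempt, including
            // after the firewall unblock of bootpd.
            fmt.Println("ip not found")
            return
        }
        fmt.Println(ip)
    }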

                                                
                                    
TestMultiControlPlane/serial/DeployApp (74.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (67.576375ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-922000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- rollout status deployment/busybox: exit status 1 (66.826208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (68.128916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:24.472067    1494 retry.go:31] will retry after 724.321433ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (68.274458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:25.266832    1494 retry.go:31] will retry after 2.220083473s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (69.2995ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:27.558394    1494 retry.go:31] will retry after 2.62752723s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1216 11:50:28.869711    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (67.862792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:30.255969    1494 retry.go:31] will retry after 2.844567865s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (68.971ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:33.171776    1494 retry.go:31] will retry after 6.687355638s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1216 11:50:39.113172    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (68.3235ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:39.929710    1494 retry.go:31] will retry after 6.609168869s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (69.049958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:50:46.610124    1494 retry.go:31] will retry after 16.72956654s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1216 11:50:59.596666    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (68.3165ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:51:03.410363    1494 retry.go:31] will retry after 8.907930003s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (66.96125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 11:51:12.387481    1494 retry.go:31] will retry after 26.293841729s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (68.316625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (67.008125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- exec  -- nslookup kubernetes.io: exit status 1 (67.261959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- exec  -- nslookup kubernetes.default: exit status 1 (67.298ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (67.365ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (37.998417ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.058412    2606 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.058418    2606 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (74.79s)
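The retry.go:31 intervals above (0.7s, 2.2s, 2.6s, 2.8s, 6.7s, 6.6s, 16.7s, 8.9s, 26.3s) grow roughly exponentially with jitter until ha_test.go:159 gives up. A minimal sketch of that pattern, assuming a doubling base delay plus random jitter (the helper retryWithBackoff is illustrative, not minikube's retry package):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs fn up to attempts times, doubling the base
    // delay after each failure and adding jitter so concurrent retries
    // do not synchronize.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(5, 500*time.Millisecond, func() error {
            return fmt.Errorf("failed to retrieve Pod IPs (may be temporary)")
        })
    }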

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-922000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (65.989125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-922000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (37.569958ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.162193    2611 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.162202    2611 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-922000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-922000 -v=7 --alsologtostderr: exit status 50 (52.319416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:51:39.198936    2613 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:51:39.199180    2613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:39.199184    2613 out.go:358] Setting ErrFile to fd 2...
	I1216 11:51:39.199186    2613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:39.199324    2613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:51:39.199573    2613 mustload.go:65] Loading cluster: ha-922000
	I1216 11:51:39.199806    2613 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:51:39.200499    2613 host.go:66] Checking if "ha-922000" exists ...
	I1216 11:51:39.204015    2613 out.go:201] 
	W1216 11:51:39.207811    2613 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-922000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-922000 endpoint: failed to lookup ip for ""
	W1216 11:51:39.207834    2613 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1216 11:51:39.211890    2613 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-922000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (37.932458ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.252814    2615 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.252821    2615 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)
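The literal "minikube delete <no value>" in the suggestion above is itself a template bug: "<no value>" is what Go's text/template writes when the data passed to Execute has no entry for a key the template references. A minimal reproduction (the template text and key name here are invented for illustration):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        t := template.Must(template.New("suggestion").Parse(
            "minikube delete {{.ProfileFlag}}\nminikube start {{.ProfileFlag}}\n"))
        // Executing with a map that lacks "ProfileFlag" prints "<no value>"
        // in place of each reference, exactly as in the log above.
        t.Execute(os.Stdout, map[string]interface{}{})
    }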

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-922000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-922000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.871292ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-922000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-922000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-922000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (38.274083ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.320248    2618 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.320259    2618 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-922000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-922000\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-922000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":84
43,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.0\",\"ClusterName\":\"ha-922000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.0\",\"ContainerRuntime\"
:\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID
\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-922000" in json of 'profile list' to have "HAppy" status but have "" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-922000\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-922000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPor
t\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.0\",\"ClusterName\":\"ha-922000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.0\",\"ContainerRun
time\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAg
entPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (38.308917ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.413635    2623 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.413641    2623 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
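ha_test.go:305 and :309 fail because the profile JSON records a single node and an empty Status where the test wants four nodes and "HAppy". A minimal sketch of that check, decoding a truncated sample of the payload captured above (the struct mirrors only the JSON keys used here; everything else is assumed):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct {
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        // Heavily truncated from the `profile list --output json` output above.
        raw := `{"valid":[{"Name":"ha-922000","Status":"","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`
        var pl profileList
        if err := json.Unmarshal([]byte(raw), &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // The test expected nodes == 4 and Status == "HAppy".
            fmt.Printf("%s: status=%q nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }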

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-922000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-922000 node stop m02 -v=7 --alsologtostderr: exit status 85 (53.250291ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:51:39.487993    2627 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:51:39.488290    2627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:39.488294    2627 out.go:358] Setting ErrFile to fd 2...
	I1216 11:51:39.488296    2627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:39.488425    2627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:51:39.488698    2627 mustload.go:65] Loading cluster: ha-922000
	I1216 11:51:39.488904    2627 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:51:39.493107    2627 out.go:201] 
	W1216 11:51:39.496119    2627 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1216 11:51:39.496124    2627 out.go:270] * 
	* 
	W1216 11:51:39.497571    2627 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:51:39.500940    2627 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-922000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (36.639625ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.578742    2631 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.578753    2631 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-922000" in json of 'profile list' to have "Degraded" status but have "" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-922000\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-922000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServer
Port\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.0\",\"ClusterName\":\"ha-922000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.0\",\"Container
Runtime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SS
HAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (37.786375ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:51:39.673009    2636 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.673023    2636 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-922000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-922000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.816083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1216 11:51:39.709179    2638 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:51:39.709437    2638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:39.709441    2638 out.go:358] Setting ErrFile to fd 2...
	I1216 11:51:39.709443    2638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:39.709608    2638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:51:39.709843    2638 mustload.go:65] Loading cluster: ha-922000
	I1216 11:51:39.710054    2638 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:51:39.713214    2638 out.go:201] 
	W1216 11:51:39.716066    2638 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1216 11:51:39.716071    2638 out.go:270] * 
	* 
	W1216 11:51:39.717498    2638 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:51:39.721067    2638 out.go:201] 

** /stderr **
ha_test.go:424: I1216 11:51:39.709179    2638 out.go:345] Setting OutFile to fd 1 ...
I1216 11:51:39.709437    2638 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:51:39.709441    2638 out.go:358] Setting ErrFile to fd 2...
I1216 11:51:39.709443    2638 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:51:39.709608    2638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 11:51:39.709843    2638 mustload.go:65] Loading cluster: ha-922000
I1216 11:51:39.710054    2638 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:51:39.713214    2638 out.go:201] 
W1216 11:51:39.716066    2638 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1216 11:51:39.716071    2638 out.go:270] * 
* 
W1216 11:51:39.717498    2638 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 11:51:39.721067    2638 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-922000 node start m02 -v=7 --alsologtostderr": exit status 85
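Exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the saved profile: the JSON dumps in this report record a single node with an empty name, so any lookup of "m02" must fail. A self-contained sketch of that lookup, with illustrative types rather than minikube's own:

    package main

    import (
        "errors"
        "fmt"
    )

    type node struct {
        Name         string
        ControlPlane bool
    }

    // findNode scans the profile's node list for a node by name.
    func findNode(nodes []node, name string) (node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, errors.New("retrieving node: Could not find node " + name)
    }

    func main() {
        // The profile in this report holds exactly one unnamed control-plane node.
        nodes := []node{{Name: "", ControlPlane: true}}
        if _, err := findNode(nodes, "m02"); err != nil {
            fmt.Println(err)
        }
    }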
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-922000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (35.521917ms)

** stderr ** 
	E1216 11:51:39.794566    2642 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1216 11:51:39.795021    2642 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1216 11:51:39.796133    2642 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1216 11:51:39.796403    2642 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1216 11:51:39.797606    2642 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
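The kubectl errors are a downstream symptom: with no live cluster recorded in the kubeconfig, kubectl falls back to its historical default of localhost:8080, where nothing is listening. A quick Go probe reproduces the refusal (the address is kubectl's fallback, not anything taken from this profile):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the legacy kubectl fallback endpoint; expect "connection refused".
        conn, err := net.DialTimeout("tcp", "localhost:8080", 2*time.Second)
        if err != nil {
            fmt.Println("dial error:", err)
            return
        }
        conn.Close()
    }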
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (37.435209ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1216 11:51:39.834154    2643 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.834166    2643 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-922000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-922000\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-922000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.0\",\"ClusterName\":\"ha-922000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-922000" in json of 'profile list' to have "HAppy" status but have "" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-922000\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-922000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.0\",\"ClusterName\":\"ha-922000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
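Both assertions above (ha_test.go:305 and ha_test.go:309) decode the same "profile list --output json" payload and then inspect the node count and the Status field. A minimal sketch of that decoding, with the struct trimmed to just the fields the assertions read (illustrative, not the test's actual types; the plain "minikube" binary name is likewise a placeholder):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList mirrors the JSON shape shown in the failure messages above.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    Name         string `json:"Name"`
                    ControlPlane bool   `json:"ControlPlane"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // For this failing run it would print: ha-922000: status="" nodes=1
            fmt.Printf("%s: status=%q nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }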
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (37.843333ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1216 11:51:39.930781    2648 status.go:393] failed to get driver ip: parsing IP: 
	E1216 11:51:39.930789    2648 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (1473.93s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-922000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-922000 -v=7 --alsologtostderr
E1216 11:51:40.560428    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:02.484152    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:28.328990    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p ha-922000 -v=7 --alsologtostderr: exit status 82 (3m2.060205541s)

-- stdout --
	* Stopping node "ha-922000"  ...
	
	

-- /stdout --
** stderr ** 
	I1216 11:51:40.003937    2652 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:51:40.004341    2652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:40.004348    2652 out.go:358] Setting ErrFile to fd 2...
	I1216 11:51:40.004350    2652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:51:40.004502    2652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:51:40.004706    2652 out.go:352] Setting JSON to false
	I1216 11:51:40.004800    2652 mustload.go:65] Loading cluster: ha-922000
	I1216 11:51:40.004981    2652 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:51:40.005019    2652 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/ha-922000/config.json ...
	I1216 11:51:40.005275    2652 mustload.go:65] Loading cluster: ha-922000
	I1216 11:51:40.005342    2652 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:51:40.005359    2652 stop.go:39] StopHost: ha-922000
	I1216 11:51:40.010695    2652 out.go:177] * Stopping node "ha-922000"  ...
	I1216 11:51:40.017628    2652 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1216 11:51:40.017666    2652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1216 11:51:40.017674    2652 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 11:51:40.071592    2652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:51:40.071617    2652 retry.go:31] will retry after 161.487318ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 11:51:40.295072    2652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:51:40.295085    2652 retry.go:31] will retry after 475.479993ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 11:51:40.832981    2652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:51:40.832995    2652 retry.go:31] will retry after 281.392481ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 11:51:41.178034    2652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:51:41.178048    2652 retry.go:31] will retry after 607.520194ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 11:51:41.847326    2652 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 11:51:41.847365    2652 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:51:41.847407    2652 main.go:141] libmachine: Stopping "ha-922000"...
	I1216 11:54:41.998785    2652 stop.go:66] stop err: Maximum number of retries (60) exceeded
	W1216 11:54:41.998818    2652 stop.go:165] stop host returned error: Temporary Error: stop: Maximum number of retries (60) exceeded
	I1216 11:54:42.003253    2652 out.go:201] 
	W1216 11:54:42.006254    2652 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: Maximum number of retries (60) exceeded
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: Maximum number of retries (60) exceeded
	W1216 11:54:42.006260    2652 out.go:270] * 
	* 
	W1216 11:54:42.007778    2652 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:54:42.021154    2652 out.go:201] 

** /stderr **
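Two bounded loops are visible in the stderr above: the config backup retries SSH a few times with short jittered delays before giving up ("will continue"), and the stop path then polls the VM up to 60 times before surfacing "Maximum number of retries (60) exceeded", which becomes exit status 82 (GUEST_STOP_TIMEOUT). Sixty polls at roughly three-second intervals is consistent with the 3m2s runtime recorded for the command. A sketch of that polling shape (names, interval, and structure are assumptions, not libmachine's code):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // stopVM polls tryStop until the machine reports stopped or retries run out.
    func stopVM(tryStop func() (stopped bool, err error)) error {
        const maxRetries = 60
        for i := 0; i < maxRetries; i++ {
            stopped, err := tryStop()
            if err != nil {
                return err
            }
            if stopped {
                return nil
            }
            time.Sleep(3 * time.Second) // assumed interval; 60 x 3s roughly matches the 3m2s seen
        }
        return errors.New("Maximum number of retries (60) exceeded")
    }

    func main() {
        // A VM that never reports stopped, as in the run above.
        err := stopVM(func() (bool, error) { return false, nil })
        fmt.Println(err)
    }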
ha_test.go:466: failed to run minikube stop. args "out/minikube-darwin-arm64 node list -p ha-922000 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-922000 --wait=true -v=7 --alsologtostderr
E1216 11:55:18.599596    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:55:46.327985    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:58:28.330031    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:59:51.422303    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:00:18.600473    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:03:28.330913    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:05:18.589612    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:06:41.678384    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:08:28.316288    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:10:18.586477    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:13:28.316246    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:18.586352    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-922000 --wait=true -v=7 --alsologtostderr: signal: killed (21m31.73115575s)
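"signal: killed (21m31.73115575s)" is not a minikube exit code: the start command was still cycling through SSH retries when the test binary's deadline expired and the child process was killed. A minimal sketch of running a command under such a deadline (the command and timeout are placeholders):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
        defer cancel()
        // "sleep 10" stands in for the long-running minikube start invocation.
        cmd := exec.CommandContext(ctx, "sleep", "10")
        if err := cmd.Run(); err != nil {
            fmt.Println(err) // prints: signal: killed
        }
    }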

-- stdout --
	* [ha-922000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-922000" primary control-plane node in "ha-922000" cluster
	* Updating the running qemu2 "ha-922000" VM ...
	* Updating the running qemu2 "ha-922000" VM ...

-- /stdout --
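The stderr that follows is essentially one failure repeated every three seconds for the rest of the 21-minute run: golang.org/x/crypto/ssh reporting "unable to authenticate, attempted methods [none publickey]". That text is produced client-side when the server rejects both the "none" probe and the offered key; a plausible cause here is that the key in the VM no longer matches the machine's id_rsa. A self-contained sketch of the client setup (the generated throwaway key and the address are stand-ins for the machine key and VM IP):

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate a throwaway key so the sketch needs no files on disk.
        _, priv, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.NewSignerFromKey(priv)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not for production
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:22", cfg) // illustrative address
        if err != nil {
            // e.g. ssh: handshake failed: ssh: unable to authenticate,
            // attempted methods [none publickey], no supported methods remain
            fmt.Println(err)
            return
        }
        client.Close()
    }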
** stderr ** 
	I1216 11:54:42.065670    2707 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:54:42.065820    2707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:54:42.065823    2707 out.go:358] Setting ErrFile to fd 2...
	I1216 11:54:42.065826    2707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:54:42.065971    2707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:54:42.067135    2707 out.go:352] Setting JSON to false
	I1216 11:54:42.085691    2707 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1453,"bootTime":1734377429,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:54:42.085770    2707 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:54:42.090164    2707 out.go:177] * [ha-922000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:54:42.096167    2707 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 11:54:42.096253    2707 notify.go:220] Checking for updates...
	I1216 11:54:42.103181    2707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:54:42.106186    2707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:54:42.109179    2707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:54:42.112226    2707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 11:54:42.115166    2707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:54:42.118437    2707 config.go:182] Loaded profile config "ha-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:54:42.118495    2707 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:54:42.122153    2707 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 11:54:42.129137    2707 start.go:297] selected driver: qemu2
	I1216 11:54:42.129144    2707 start.go:901] validating driver "qemu2" against &{Name:ha-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:54:42.129191    2707 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:54:42.131858    2707 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:54:42.131883    2707 cni.go:84] Creating CNI manager for ""
	I1216 11:54:42.131906    2707 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 11:54:42.131953    2707 start.go:340] cluster config:
	{Name:ha-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:54:42.136670    2707 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:54:42.140193    2707 out.go:177] * Starting "ha-922000" primary control-plane node in "ha-922000" cluster
	I1216 11:54:42.148170    2707 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 11:54:42.148187    2707 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 11:54:42.148194    2707 cache.go:56] Caching tarball of preloaded images
	I1216 11:54:42.148267    2707 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 11:54:42.148273    2707 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 11:54:42.148319    2707 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/ha-922000/config.json ...
	I1216 11:54:42.148679    2707 start.go:360] acquireMachinesLock for ha-922000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:54:42.148725    2707 start.go:364] duration metric: took 39.667µs to acquireMachinesLock for "ha-922000"
	I1216 11:54:42.148733    2707 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:54:42.148738    2707 fix.go:54] fixHost starting: 
	I1216 11:54:42.149290    2707 fix.go:112] recreateIfNeeded on ha-922000: state=Running err=<nil>
	W1216 11:54:42.149297    2707 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:54:42.152222    2707 out.go:177] * Updating the running qemu2 "ha-922000" VM ...
	I1216 11:54:42.160191    2707 machine.go:93] provisionDockerMachine start ...
	I1216 11:54:42.160236    2707 main.go:141] libmachine: Using SSH client type: native
	I1216 11:54:42.160351    2707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046cf1b0] 0x1046d19f0 <nil>  [] 0s}  22 <nil> <nil>}
	I1216 11:54:42.160355    2707 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:54:42.217976    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:54:45.277781    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:54:48.339316    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:54:51.401060    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:54:54.457695    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:54:57.512368    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:00.572421    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:03.635291    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:06.696432    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:09.759001    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:12.819092    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:15.880382    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:18.942044    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:22.001513    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:25.065116    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:28.125405    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:31.186244    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:34.246933    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:37.307591    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:40.368728    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:43.425358    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:46.482878    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:49.540123    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:52.602744    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:55.664912    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:55:58.728375    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:01.790325    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:04.852684    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:07.912399    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:10.970939    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:14.028776    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:17.088194    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:20.150448    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:23.213221    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:26.274547    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:29.336038    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:32.396451    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:35.455889    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:38.509451    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:41.571043    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:44.629785    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:47.689566    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:50.750795    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:53.813552    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:56.873015    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:56:59.936147    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:02.997770    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:06.059446    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:09.118616    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:12.178795    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:15.238628    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:18.299567    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:21.361386    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:24.421522    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:27.480474    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:30.538949    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:33.603036    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:36.665436    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:39.725141    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:42.787950    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 11:57:45.789933    2707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:57:45.789958    2707 buildroot.go:166] provisioning hostname "ha-922000"
	I1216 11:57:45.790031    2707 main.go:141] libmachine: Using SSH client type: native
	I1216 11:57:45.790195    2707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046cf1b0] 0x1046d19f0 <nil>  [] 0s}  22 <nil> <nil>}
	I1216 11:57:45.790201    2707 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-922000 && echo "ha-922000" | sudo tee /etc/hostname
	I1216 11:57:45.846511    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 58 further identical "Error dialing TCP: ssh: handshake failed" retries (11:57:48 through 12:00:43, one roughly every 3s) omitted ...]
	I1216 12:00:46.394454    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:00:49.396537    2707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:00:49.396654    2707 main.go:141] libmachine: Using SSH client type: native
	I1216 12:00:49.396809    2707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046cf1b0] 0x1046d19f0 <nil>  [] 0s}  22 <nil> <nil>}
	I1216 12:00:49.396820    2707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922000' | sudo tee -a /etc/hosts; 
				fi
			fi
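
	For context, the shell fragment above is minikube's idempotent hostname fixup: if no line in /etc/hosts already names ha-922000, it either rewrites an existing 127.0.1.1 entry in place or appends one. A minimal sketch of how such a command is run over SSH, assuming golang.org/x/crypto/ssh (the key path and guest address below are hypothetical, not taken from this run):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; minikube uses .minikube/machines/<name>/id_rsa.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	// If the guest refuses the offered key, Dial fails with exactly the
	// "ssh: handshake failed ... [none publickey]" error seen in this log.
	client, err := ssh.Dial("tcp", "192.168.105.5:22", cfg) // hypothetical guest IP
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("grep ha-922000 /etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}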
	I1216 12:00:49.454555    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 58 further identical "Error dialing TCP: ssh: handshake failed" retries (12:00:52 through 12:03:46, one roughly every 3s) omitted ...]
	I1216 12:03:50.006384    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:53.008439    2707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:03:53.008454    2707 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20091-990/.minikube CaCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20091-990/.minikube}
	I1216 12:03:53.008477    2707 buildroot.go:174] setting up certificates
	I1216 12:03:53.008485    2707 provision.go:84] configureAuth start
	I1216 12:03:53.008492    2707 provision.go:143] copyHostCerts
	I1216 12:03:53.008533    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:03:53.008599    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:03:53.008607    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:03:53.008738    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:03:53.008926    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:03:53.008957    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:03:53.008960    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:03:53.009021    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:03:53.009126    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:03:53.009156    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:03:53.009159    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:03:53.009220    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:03:53.009324    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:03:53.069742    2707 provision.go:177] copyRemoteCerts
	I1216 12:03:53.069792    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:03:53.069804    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:03:53.128297    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:53.128321    2707 retry.go:31] will retry after 149.652231ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:53.337114    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:53.337127    2707 retry.go:31] will retry after 406.309979ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:53.804235    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:53.804247    2707 retry.go:31] will retry after 717.510343ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:54.580867    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:54.580899    2707 retry.go:31] will retry after 324.380031ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:54.907313    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:03:54.965575    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:54.965588    2707 retry.go:31] will retry after 215.443795ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:55.239502    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:55.239515    2707 retry.go:31] will retry after 500.794477ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:55.799893    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:55.799906    2707 retry.go:31] will retry after 569.808943ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:56.431592    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:56.431633    2707 provision.go:87] duration metric: took 3.423127917s to configureAuth
	W1216 12:03:56.431639    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:56.431645    2707 retry.go:31] will retry after 62.192µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
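
	The retry.go:31 lines above show the pattern driving this whole section: every dial or session error is treated as temporary and rescheduled with a short, randomized delay until an overall deadline expires. A minimal sketch of that pattern (assumed shape only, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a growing, jittered delay until it
// succeeds or the deadline passes.
func retryUntil(deadline time.Time, fn func() error) error {
	delay := 100 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay *= 2 // back off, capped at roughly 2s
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(time.Now().Add(3*time.Second), func() error {
		attempts++
		if attempts < 4 {
			return errors.New("ssh: handshake failed") // simulated failure
		}
		return nil
	})
	fmt.Println("result:", err, "after", attempts, "attempts")
}

	Because the key is being rejected outright rather than the host being briefly unreachable, no amount of retrying can succeed here: each loop simply burns its deadline, which is why configureAuth keeps reporting a duration of a few seconds and starting over.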
	I1216 12:03:56.431735    2707 provision.go:84] configureAuth start
	I1216 12:03:56.431744    2707 provision.go:143] copyHostCerts
	I1216 12:03:56.431766    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:03:56.431796    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:03:56.431800    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:03:56.431879    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:03:56.432035    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:03:56.432053    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:03:56.432056    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:03:56.432099    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:03:56.432189    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:03:56.432207    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:03:56.432209    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:03:56.432248    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:03:56.432341    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:03:56.569497    2707 provision.go:177] copyRemoteCerts
	I1216 12:03:56.569534    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:03:56.569545    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:03:56.626743    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:56.626754    2707 retry.go:31] will retry after 215.890875ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:56.904402    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:56.904414    2707 retry.go:31] will retry after 460.509097ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:57.424828    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:57.424839    2707 retry.go:31] will retry after 297.298596ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:57.784271    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:57.784305    2707 retry.go:31] will retry after 270.088725ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:58.056417    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:03:58.112291    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:58.112303    2707 retry.go:31] will retry after 240.597162ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:58.414310    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:58.414321    2707 retry.go:31] will retry after 422.591381ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:58.896472    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:58.896483    2707 retry.go:31] will retry after 790.811118ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:03:59.747735    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:59.747773    2707 provision.go:87] duration metric: took 3.316019541s to configureAuth
	W1216 12:03:59.747778    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:03:59.747785    2707 retry.go:31] will retry after 77.957µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... three further configureAuth attempts (starting 12:03:59.747, 12:04:03.442 and 12:04:05.367) omitted; each repeats the same copyHostCerts / server-cert generation / copyRemoteCerts sequence, fails every SSH dial with the same publickey handshake error, and gives up after 3.694s, 1.924s and 2.701s respectively before scheduling the next retry ...]
	I1216 12:04:08.068706    2707 provision.go:84] configureAuth start
	I1216 12:04:08.068715    2707 provision.go:143] copyHostCerts
	I1216 12:04:08.068729    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:08.068757    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:08.068762    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:08.068864    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:08.069030    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:08.069046    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:08.069049    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:08.069092    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:08.069190    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:08.069206    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:08.069209    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:08.069251    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:08.069346    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:08.186111    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:08.186152    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:08.186162    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:08.244068    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:08.244079    2707 retry.go:31] will retry after 232.370946ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:08.536939    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:08.536979    2707 retry.go:31] will retry after 254.026171ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:08.851453    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:08.851465    2707 retry.go:31] will retry after 541.521557ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:09.454320    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:09.454352    2707 retry.go:31] will retry after 214.194969ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:09.670573    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:09.726624    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:09.726635    2707 retry.go:31] will retry after 206.840752ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:09.992393    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:09.992405    2707 retry.go:31] will retry after 441.232859ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:10.494634    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:10.494647    2707 retry.go:31] will retry after 788.354804ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:11.341868    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:11.341899    2707 provision.go:87] duration metric: took 3.273174625s to configureAuth
	W1216 12:04:11.341902    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:11.341906    2707 retry.go:31] will retry after 1.117095ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
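
The cycle above — copyHostCerts, server-cert generation, then eight failed SSH dials before configureAuth gives up — repeats for the remainder of this log. The irregular inner delays (232ms, 254ms, 541ms, ...) are consistent with a jittered exponential backoff in the retry helper. A minimal sketch of that pattern, assuming an illustrative helper name and parameters rather than minikube's actual retry API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a hypothetical stand-in for the helper behind the
// "retry.go:31] will retry after ..." lines: double the base delay each
// attempt and add random jitter, which explains the uneven intervals.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		delay := base << uint(i)                        // exponential growth
		delay += time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(4, 200*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed")
	})
	fmt.Println(err)
}
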
	I1216 12:04:11.343315    2707 provision.go:84] configureAuth start
	I1216 12:04:11.343323    2707 provision.go:143] copyHostCerts
	I1216 12:04:11.343346    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:11.343373    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:11.343377    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:11.343462    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:11.343624    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:11.343641    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:11.343643    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:11.343682    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:11.343768    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:11.343783    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:11.343786    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:11.343822    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:11.343913    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:11.495734    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:11.495768    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:11.495777    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:11.554356    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:11.554368    2707 retry.go:31] will retry after 228.121081ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:11.842759    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:11.842772    2707 retry.go:31] will retry after 321.189084ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:12.222429    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:12.222442    2707 retry.go:31] will retry after 364.659463ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:12.646922    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:12.646955    2707 retry.go:31] will retry after 251.249465ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:12.900232    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:12.956520    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:12.956535    2707 retry.go:31] will retry after 207.064113ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:13.224716    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:13.224728    2707 retry.go:31] will retry after 336.350679ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:13.620436    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:13.620448    2707 retry.go:31] will retry after 599.730145ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:14.279479    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:14.279519    2707 provision.go:87] duration metric: took 2.936187833s to configureAuth
	W1216 12:04:14.279522    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:14.279526    2707 retry.go:31] will retry after 1.071285ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
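
Every dial fails identically: the server rejects both the implicit "none" method and the offered public key, which is exactly what golang.org/x/crypto/ssh reports as "unable to authenticate, attempted methods [none publickey]". A minimal sketch of the dial being attempted, with a placeholder address (note the empty IP: field in the sshutil.go:53 lines, which suggests the driver never reported a machine address):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialWithKey mirrors the publickey dial sshutil.go keeps retrying. The
// address below is a placeholder; the key path matches the one logged.
func dialWithKey(addr, user, keyPath string) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, fmt.Errorf("read key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	// If the server does not accept the key, Dial returns the error seen
	// above: "ssh: handshake failed: ssh: unable to authenticate,
	// attempted methods [none publickey], no supported methods remain".
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	c, err := dialWithKey("127.0.0.1:22", "docker",
		"/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	c.Close()
}

With user-mode QEMU networking the guest is normally reached through a forwarded localhost port, so an empty IP here may point at the driver's address reporting rather than at the key itself.
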
	I1216 12:04:14.280877    2707 provision.go:84] configureAuth start
	I1216 12:04:14.280883    2707 provision.go:143] copyHostCerts
	I1216 12:04:14.280904    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:14.280930    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:14.280934    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:14.281033    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:14.281181    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:14.281198    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:14.281201    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:14.281243    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:14.281335    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:14.281350    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:14.281353    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:14.281389    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:14.281474    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:14.348276    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:14.348307    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:14.348314    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:14.403191    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:14.403202    2707 retry.go:31] will retry after 252.103039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:14.717820    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:14.717834    2707 retry.go:31] will retry after 488.834441ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:15.266174    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:15.266186    2707 retry.go:31] will retry after 352.236889ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:15.680463    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:15.680498    2707 retry.go:31] will retry after 291.678715ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:15.974218    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:16.032700    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:16.032712    2707 retry.go:31] will retry after 219.888842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:16.314131    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:16.314143    2707 retry.go:31] will retry after 530.168015ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:16.903959    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:16.903970    2707 retry.go:31] will retry after 589.920981ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:17.552730    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:17.552767    2707 provision.go:87] duration metric: took 3.271875541s to configureAuth
	W1216 12:04:17.552773    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:17.552778    2707 retry.go:31] will retry after 1.492095ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
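
Each pass also re-runs copyHostCerts even though the local files never change; the paired exec_runner.go rm/cp lines above are a remove-then-copy, so a stale cert is replaced atomically from the caller's point of view rather than partially overwritten. A sketch of that step, assuming a hypothetical helper name:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyHostCert reproduces the logged rm + cp pair: delete any existing
// destination first, then copy the source cert into place.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		// "found <dst>, removing ..." in the log above.
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Placeholder paths; the report copies ca.pem, cert.pem, and key.pem.
	if err := copyHostCert("certs/ca.pem", "ca.pem"); err != nil {
		fmt.Println(err)
	}
}
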
	I1216 12:04:17.554674    2707 provision.go:84] configureAuth start
	I1216 12:04:17.554682    2707 provision.go:143] copyHostCerts
	I1216 12:04:17.554697    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:17.554725    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:17.554729    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:17.554839    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:17.554997    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:17.555015    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:17.555018    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:17.555058    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:17.555146    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:17.555161    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:17.555164    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:17.555202    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:17.555295    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:17.705196    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:17.705239    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:17.705247    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:17.763862    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:17.763875    2707 retry.go:31] will retry after 140.102016ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:17.963336    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:17.963350    2707 retry.go:31] will retry after 309.954893ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:18.333840    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:18.333851    2707 retry.go:31] will retry after 656.767167ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:19.050035    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:19.050069    2707 retry.go:31] will retry after 208.918953ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:19.261014    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:19.316731    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:19.316742    2707 retry.go:31] will retry after 206.561044ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:19.582864    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:19.582877    2707 retry.go:31] will retry after 247.937523ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:19.892024    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:19.892037    2707 retry.go:31] will retry after 310.484514ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:20.264189    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:20.264200    2707 retry.go:31] will retry after 856.196413ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:21.180472    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:21.180509    2707 provision.go:87] duration metric: took 3.6258175s to configureAuth
	W1216 12:04:21.180512    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:21.180518    2707 retry.go:31] will retry after 2.403434ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
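
The provision.go:117 line regenerates server.pem on every pass, signing it with the local CA for the listed SANs; note the leading empty entry in san=[ 127.0.0.1 ha-922000 localhost minikube], again consistent with a missing machine IP. A simplified sketch of that signing step with crypto/x509 — not minikube's provisioner, and with error handling elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// SANs as logged; the leading empty string mirrors the blank slot in
	// the san=[...] list above.
	sans := []string{"", "127.0.0.1", "ha-922000", "localhost", "minikube"}

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-922000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Split SANs into IP and DNS entries; empty strings are dropped.
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else if s != "" {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}
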
	I1216 12:04:21.183538    2707 provision.go:84] configureAuth start
	I1216 12:04:21.183547    2707 provision.go:143] copyHostCerts
	I1216 12:04:21.183570    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:21.183599    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:21.183605    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:21.183688    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:21.183861    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:21.183878    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:21.183881    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:21.183921    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:21.184014    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:21.184029    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:21.184032    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:21.184068    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:21.184160    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:21.256979    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:21.257010    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:21.257017    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:21.314366    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:21.314378    2707 retry.go:31] will retry after 180.244192ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:21.552397    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:21.552409    2707 retry.go:31] will retry after 400.437311ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:22.013054    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:22.013064    2707 retry.go:31] will retry after 835.377136ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:22.910434    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:22.910472    2707 provision.go:87] duration metric: took 1.726921833s to configureAuth
	W1216 12:04:22.910477    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:22.910483    2707 retry.go:31] will retry after 4.527383ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:22.916152    2707 provision.go:84] configureAuth start
	I1216 12:04:22.916160    2707 provision.go:143] copyHostCerts
	I1216 12:04:22.916188    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:22.916222    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:22.916242    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:22.916367    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:22.916530    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:22.916547    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:22.916551    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:22.916591    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:22.916679    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:22.916694    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:22.916697    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:22.916738    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:22.916832    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:22.968418    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:22.968449    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:22.968463    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:23.021228    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:23.021239    2707 retry.go:31] will retry after 270.09893ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:23.352067    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:23.352079    2707 retry.go:31] will retry after 218.238432ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:23.630601    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:23.630615    2707 retry.go:31] will retry after 400.07648ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:24.090636    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:24.090647    2707 retry.go:31] will retry after 635.550974ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:24.786320    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:24.786357    2707 provision.go:87] duration metric: took 1.870193458s to configureAuth
	W1216 12:04:24.786360    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:24.786366    2707 retry.go:31] will retry after 6.349623ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
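
Two retry loops are stacked here: the inner one backs off a few hundred milliseconds between SSH dials, while the outer one restarts the whole configureAuth pass after only 1-7ms, so each ~1.7-3.6s pass re-runs almost immediately and the local cert work is repeated each time. A sketch of that shape, with illustrative names:

package main

import (
	"fmt"
	"time"
)

// configureAuthPass stands in for one provision.go:84 pass: the local cert
// work succeeds every time; only the remote copy over SSH keeps failing,
// so the pass as a whole fails.
func configureAuthPass() error {
	return fmt.Errorf("NewSession: ssh: handshake failed")
}

func main() {
	wait := time.Millisecond
	for i := 0; i < 5; i++ {
		start := time.Now()
		if err := configureAuthPass(); err != nil {
			fmt.Printf("duration metric: took %v to configureAuth\n", time.Since(start))
			fmt.Printf("will retry after %v: Temporary Error: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2 // outer backoff stays in the low milliseconds at first
			continue
		}
		return
	}
}
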
	I1216 12:04:24.794316    2707 provision.go:84] configureAuth start
	I1216 12:04:24.794325    2707 provision.go:143] copyHostCerts
	I1216 12:04:24.794352    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:24.794389    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:24.794394    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:24.794490    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:24.794672    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:24.794689    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:24.794693    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:24.794733    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:24.794826    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:24.794842    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:24.794845    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:24.794883    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:24.794974    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:24.840195    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:24.840236    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:24.840243    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:24.895866    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:24.895878    2707 retry.go:31] will retry after 347.076674ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:25.302278    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:25.302289    2707 retry.go:31] will retry after 533.384352ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:25.889228    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:25.889239    2707 retry.go:31] will retry after 328.343601ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:26.275142    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:26.275175    2707 retry.go:31] will retry after 258.282668ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:26.535483    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:26.590939    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:26.590950    2707 retry.go:31] will retry after 138.830082ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:26.789620    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:26.789632    2707 retry.go:31] will retry after 288.140593ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:27.133127    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:27.133139    2707 retry.go:31] will retry after 324.144746ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:27.516492    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:27.516530    2707 provision.go:87] duration metric: took 2.7221985s to configureAuth
	W1216 12:04:27.516537    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:27.516543    2707 retry.go:31] will retry after 10.888569ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:27.529447    2707 provision.go:84] configureAuth start
	I1216 12:04:27.529461    2707 provision.go:143] copyHostCerts
	I1216 12:04:27.529477    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:27.529506    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:27.529511    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:27.529638    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:27.529805    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:27.529822    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:27.529825    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:27.529869    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:27.529955    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:27.529971    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:27.529974    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:27.530010    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:27.530101    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:27.672977    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:27.673015    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:27.673023    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:27.730136    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:27.730149    2707 retry.go:31] will retry after 244.060417ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:28.034776    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:28.034788    2707 retry.go:31] will retry after 317.954263ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:28.413582    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:28.413593    2707 retry.go:31] will retry after 783.619263ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:29.258318    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:29.258349    2707 retry.go:31] will retry after 205.664932ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:29.466038    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:29.525251    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:29.525266    2707 retry.go:31] will retry after 143.544009ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:29.729716    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:29.729728    2707 retry.go:31] will retry after 437.598019ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:30.226842    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:30.226854    2707 retry.go:31] will retry after 413.597741ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:30.697449    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:30.697486    2707 provision.go:87] duration metric: took 3.168016584s to configureAuth
	W1216 12:04:30.697491    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:30.697495    2707 retry.go:31] will retry after 13.709862ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:30.713230    2707 provision.go:84] configureAuth start
	I1216 12:04:30.713246    2707 provision.go:143] copyHostCerts
	I1216 12:04:30.713266    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:30.713301    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:30.713309    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:30.713397    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:30.713597    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:30.713614    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:30.713617    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:30.713661    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:30.713759    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:30.713775    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:30.713778    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:30.713815    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:30.713921    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:30.957933    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:30.957992    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:30.958003    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:31.019827    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:31.019839    2707 retry.go:31] will retry after 276.323505ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:31.351232    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:31.351247    2707 retry.go:31] will retry after 535.963877ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:31.949562    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:31.949573    2707 retry.go:31] will retry after 382.121842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:32.391356    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:32.391389    2707 retry.go:31] will retry after 301.403985ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:32.694841    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:32.749249    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:32.749261    2707 retry.go:31] will retry after 312.127517ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:33.120298    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:33.120313    2707 retry.go:31] will retry after 381.449686ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:33.561039    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:33.561051    2707 retry.go:31] will retry after 517.169676ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:34.137347    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:34.137389    2707 provision.go:87] duration metric: took 3.424133917s to configureAuth
	W1216 12:04:34.137392    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:34.137396    2707 retry.go:31] will retry after 26.670573ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:34.166081    2707 provision.go:84] configureAuth start
	I1216 12:04:34.166098    2707 provision.go:143] copyHostCerts
	I1216 12:04:34.166114    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:34.166148    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:34.166153    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:34.166281    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:34.166471    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:34.166492    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:34.166495    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:34.166540    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:34.166636    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:34.166653    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:34.166656    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:34.166695    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:34.166791    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:34.269407    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:34.269446    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:34.269454    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:34.326103    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:34.326116    2707 retry.go:31] will retry after 309.315191ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:34.698144    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:34.698155    2707 retry.go:31] will retry after 446.611948ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:35.206672    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:35.206686    2707 retry.go:31] will retry after 763.013763ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:36.031975    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:36.032012    2707 provision.go:87] duration metric: took 1.86591025s to configureAuth
	W1216 12:04:36.032015    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:36.032021    2707 retry.go:31] will retry after 28.029689ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:36.062098    2707 provision.go:84] configureAuth start
	I1216 12:04:36.062106    2707 provision.go:143] copyHostCerts
	I1216 12:04:36.062128    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:36.062157    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:36.062162    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:36.062244    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:36.062410    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:36.062427    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:36.062430    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:36.062469    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:36.062556    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:36.062572    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:36.062576    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:36.062616    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:36.062705    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:36.184996    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:36.185033    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:36.185041    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:36.244497    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:36.244510    2707 retry.go:31] will retry after 358.441416ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:36.660753    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:36.660764    2707 retry.go:31] will retry after 402.230593ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:37.123181    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:37.123194    2707 retry.go:31] will retry after 767.411178ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:37.951314    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:37.951348    2707 retry.go:31] will retry after 195.727942ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:38.149125    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:38.205607    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:38.205619    2707 retry.go:31] will retry after 333.858208ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:38.601319    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:38.601332    2707 retry.go:31] will retry after 248.287501ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:38.910849    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:38.910861    2707 retry.go:31] will retry after 503.978517ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:39.474124    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:39.474163    2707 provision.go:87] duration metric: took 3.4120475s to configureAuth
	W1216 12:04:39.474168    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:39.474173    2707 retry.go:31] will retry after 39.019733ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
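Each `retry.go:31] will retry after …` line is emitted by a backoff loop that sleeps a growing, jittered delay between attempts. A minimal sketch of that shape; the exact growth and jitter policy here is an assumption, not minikube's implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn up to attempts times, sleeping a jittered, growing
// delay between tries — the shape behind the "will retry after
// 309.315191ms" lines in the transcript.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                         // exponential growth
		d = d/2 + time.Duration(rand.Int63n(int64(d))) // +/-50% jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retryAfter(4, 300*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed")
	})
	fmt.Println("gave up:", err)
}
```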
	I1216 12:04:39.515213    2707 provision.go:84] configureAuth start
	I1216 12:04:39.515239    2707 provision.go:143] copyHostCerts
	I1216 12:04:39.515263    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:39.515306    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:39.515312    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:39.515431    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:39.515645    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:39.515664    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:39.515666    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:39.515720    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:39.515806    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:39.515821    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:39.515825    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:39.515865    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:39.515954    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:39.643680    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:39.643718    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:39.643727    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:39.701154    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:39.701167    2707 retry.go:31] will retry after 350.741634ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:40.114078    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:40.114090    2707 retry.go:31] will retry after 380.343756ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:40.556353    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:40.556366    2707 retry.go:31] will retry after 791.395898ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:41.408710    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:41.408746    2707 provision.go:87] duration metric: took 1.893504791s to configureAuth
	W1216 12:04:41.408749    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:41.408755    2707 retry.go:31] will retry after 69.612597ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
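The provision.go:117 step above mints a server certificate signed by the local minikube CA, carrying the SANs shown in the log (`127.0.0.1 ha-922000 localhost minikube`; the leading gap inside `san=[ …]` is the machine's empty IP again). A self-contained sketch of CA-signed server-cert generation with `crypto/x509` — an illustration of the step, not minikube's code:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER) // sketch: parse error elided

	// Server cert carrying the SANs from the provision.go:117 line.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-922000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-922000", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```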
	I1216 12:04:41.480404    2707 provision.go:84] configureAuth start
	I1216 12:04:41.480412    2707 provision.go:143] copyHostCerts
	I1216 12:04:41.480435    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:41.480465    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:41.480469    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:41.480561    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:41.480729    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:41.480745    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:41.480748    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:41.480791    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:41.480885    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:41.480901    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:41.480904    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:41.480939    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:41.481029    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:41.577574    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:41.577606    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:41.577614    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:41.636005    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:41.636014    2707 retry.go:31] will retry after 361.862645ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:42.055319    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:42.055333    2707 retry.go:31] will retry after 242.610132ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:42.354178    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:42.354190    2707 retry.go:31] will retry after 603.271628ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:43.017046    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:43.017078    2707 retry.go:31] will retry after 132.976013ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:43.152107    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:43.206236    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:43.206247    2707 retry.go:31] will retry after 233.826315ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:43.499341    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:43.499353    2707 retry.go:31] will retry after 430.102351ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:43.988979    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:43.988991    2707 retry.go:31] will retry after 491.020837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:44.540856    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:44.540894    2707 provision.go:87] duration metric: took 3.060473584s to configureAuth
	W1216 12:04:44.540898    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:44.540902    2707 retry.go:31] will retry after 65.598314ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
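The `sudo mkdir -p /etc/docker /etc/docker /etc/docker` commands above look garbled but are plausibly genuine log output: if copyRemoteCerts derives one target directory per file asset (ca.pem, server.pem, server-key.pem all land under /etc/docker) without deduplicating, the same directory repeats once per file, and `mkdir -p` tolerates it. A small sketch of that derivation — an assumption about the pattern, not minikube's source:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

func main() {
	// Remote destinations for the three certs pushed by copyRemoteCerts.
	dests := []string{
		"/etc/docker/ca.pem",
		"/etc/docker/server.pem",
		"/etc/docker/server-key.pem",
	}
	// One target dir per file, no deduplication: yields the
	// repeated-argument command seen in the log.
	dirs := make([]string, 0, len(dests))
	for _, d := range dests {
		dirs = append(dirs, path.Dir(d))
	}
	fmt.Println("sudo mkdir -p " + strings.Join(dirs, " "))
	// Output: sudo mkdir -p /etc/docker /etc/docker /etc/docker
}
```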
	I1216 12:04:44.608517    2707 provision.go:84] configureAuth start
	I1216 12:04:44.608540    2707 provision.go:143] copyHostCerts
	I1216 12:04:44.608566    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:44.608606    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:44.608611    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:44.608772    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:44.608995    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:44.609018    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:44.609021    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:44.609065    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:44.609157    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:44.609175    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:44.609179    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:44.609216    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:44.609307    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:44.714621    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:44.714657    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:44.714665    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:44.772967    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:44.772977    2707 retry.go:31] will retry after 153.43851ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:44.986125    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:44.986136    2707 retry.go:31] will retry after 364.735378ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:45.412467    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:45.412479    2707 retry.go:31] will retry after 785.861001ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:46.259092    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:46.259122    2707 retry.go:31] will retry after 174.075341ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:46.435243    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:46.491418    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:46.491429    2707 retry.go:31] will retry after 231.776226ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:46.784154    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:46.784165    2707 retry.go:31] will retry after 558.558181ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:47.402142    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:47.402153    2707 retry.go:31] will retry after 292.095784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:47.753429    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:47.753466    2707 provision.go:87] duration metric: took 3.144919291s to configureAuth
	W1216 12:04:47.753469    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:47.753474    2707 retry.go:31] will retry after 142.070026ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
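Each cycle's copyHostCerts step (the exec_runner.go rm/cp pairs throughout this transcript) refreshes ca.pem, cert.pem and key.pem under .minikube by deleting any stale copy, rewriting it, and logging the byte count. A minimal sketch with plain os calls and hypothetical local paths:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

// copyHostCert mirrors the rm/cp pairs in the log: drop any existing copy,
// then rewrite it and report the size (e.g. "cert.pem ... (1123 bytes)").
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("found %s, removing ...\n", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Hypothetical paths; the log uses .minikube/certs/*.pem.
	if err := copyHostCert("certs/ca.pem", "ca.pem"); err != nil {
		log.Fatal(err)
	}
}
```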
	I1216 12:04:47.897568    2707 provision.go:84] configureAuth start
	I1216 12:04:47.897592    2707 provision.go:143] copyHostCerts
	I1216 12:04:47.897627    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:47.897669    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:47.897679    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:47.897833    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:47.898057    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:47.898077    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:47.898081    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:47.898129    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:47.898227    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:47.898245    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:47.898249    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:47.898288    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:47.898381    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:48.085587    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:48.085627    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:48.085634    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:48.139586    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:48.139599    2707 retry.go:31] will retry after 365.829003ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:48.567796    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:48.567807    2707 retry.go:31] will retry after 317.156598ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:48.947041    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:48.947052    2707 retry.go:31] will retry after 650.360981ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:49.659878    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:49.659911    2707 retry.go:31] will retry after 151.202038ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:49.813139    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:49.867894    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:49.867904    2707 retry.go:31] will retry after 218.169306ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:50.145492    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:50.145503    2707 retry.go:31] will retry after 354.045845ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:50.559034    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:50.559045    2707 retry.go:31] will retry after 665.687024ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:51.284513    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:51.284526    2707 retry.go:31] will retry after 515.363897ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:51.859290    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:51.859322    2707 provision.go:87] duration metric: took 3.961721333s to configureAuth
	W1216 12:04:51.859327    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:51.859331    2707 retry.go:31] will retry after 225.541401ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
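The line closing each cycle wraps the failure as `Temporary Error: NewSession: …`, and the outer loop retries configureAuth only because the error is classified as retryable. A sketch of that classify-then-retry shape using an error wrapper and `errors.As` — an assumed pattern matching the log text, not minikube's exact types:

```go
package main

import (
	"errors"
	"fmt"
)

// temporaryError marks failures worth retrying, matching the
// "Temporary Error: ..." prefix in the log. The type is illustrative.
type temporaryError struct{ err error }

func (t *temporaryError) Error() string { return "Temporary Error: " + t.err.Error() }
func (t *temporaryError) Unwrap() error { return t.err }

func configureAuth() error {
	// SSH sessions keep failing, so surface the error as retryable.
	return &temporaryError{errors.New("NewSession: ssh: handshake failed")}
}

func main() {
	err := configureAuth()
	var tmp *temporaryError
	if errors.As(err, &tmp) {
		fmt.Println("will retry:", err) // retryable: schedule another attempt
	} else {
		fmt.Println("fatal:", err) // permanent: give up
	}
}
```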
	I1216 12:04:52.086895    2707 provision.go:84] configureAuth start
	I1216 12:04:52.086904    2707 provision.go:143] copyHostCerts
	I1216 12:04:52.086928    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:52.086959    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:52.086964    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:52.087148    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:52.087616    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:52.087635    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:52.087640    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:52.087680    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:52.087781    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:52.087797    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:52.087802    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:52.087838    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:52.087940    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:52.164174    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:52.164206    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:52.164214    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:52.220791    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:52.220802    2707 retry.go:31] will retry after 363.05003ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:52.643116    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:52.643127    2707 retry.go:31] will retry after 391.228697ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:53.094295    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:53.094306    2707 retry.go:31] will retry after 456.093116ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:53.609896    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:53.609927    2707 retry.go:31] will retry after 282.168072ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:53.894126    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:53.947602    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:53.947615    2707 retry.go:31] will retry after 313.475156ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:54.321478    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:54.321489    2707 retry.go:31] will retry after 558.284116ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:54.940966    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:54.940978    2707 retry.go:31] will retry after 625.190721ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:55.624897    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:55.624934    2707 provision.go:87] duration metric: took 3.5393385s to configureAuth
	W1216 12:04:55.624937    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:55.624942    2707 retry.go:31] will retry after 483.383383ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:56.109925    2707 provision.go:84] configureAuth start
	I1216 12:04:56.109946    2707 provision.go:143] copyHostCerts
	I1216 12:04:56.109980    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:56.110021    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:04:56.110032    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:04:56.110172    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:04:56.110408    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:56.110427    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:04:56.110430    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:04:56.110474    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:04:56.110567    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:56.110585    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:04:56.110588    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:04:56.110634    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:04:56.110723    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:04:56.299762    2707 provision.go:177] copyRemoteCerts
	I1216 12:04:56.299806    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:04:56.299816    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:56.360585    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:56.360597    2707 retry.go:31] will retry after 244.581041ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:56.664036    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:56.664047    2707 retry.go:31] will retry after 439.452078ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:57.159226    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:57.159237    2707 retry.go:31] will retry after 628.640712ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:57.846402    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:57.846435    2707 retry.go:31] will retry after 171.121618ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:58.019481    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:04:58.076434    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:58.076447    2707 retry.go:31] will retry after 356.288228ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:58.491452    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:58.491464    2707 retry.go:31] will retry after 512.08999ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:59.060975    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:59.060987    2707 retry.go:31] will retry after 818.165621ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:04:59.939461    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:59.939498    2707 provision.go:87] duration metric: took 3.832416s to configureAuth
	W1216 12:04:59.939501    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:04:59.939506    2707 retry.go:31] will retry after 716.00596ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:00.657070    2707 provision.go:84] configureAuth start
	I1216 12:05:00.657080    2707 provision.go:143] copyHostCerts
	I1216 12:05:00.657108    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:00.657155    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:00.657161    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:00.657319    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:00.657519    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:00.657539    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:00.657541    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:00.657584    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:00.657699    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:00.657716    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:00.657720    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:00.657758    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:00.657851    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:00.890735    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:00.890789    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:00.890800    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:00.952036    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:00.952048    2707 retry.go:31] will retry after 362.653292ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:01.376494    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:01.376506    2707 retry.go:31] will retry after 328.973454ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:01.766918    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:01.766931    2707 retry.go:31] will retry after 431.126631ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:02.257826    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:02.257858    2707 retry.go:31] will retry after 242.06635ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:02.501833    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:02.557006    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:02.557017    2707 retry.go:31] will retry after 371.525038ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:02.986910    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:02.986922    2707 retry.go:31] will retry after 305.616922ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:03.351833    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:03.351845    2707 retry.go:31] will retry after 431.219821ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:03.842561    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:03.842598    2707 provision.go:87] duration metric: took 3.187325666s to configureAuth
	W1216 12:05:03.842604    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:03.842608    2707 retry.go:31] will retry after 680.01791ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:04.524301    2707 provision.go:84] configureAuth start
	I1216 12:05:04.524330    2707 provision.go:143] copyHostCerts
	I1216 12:05:04.524356    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:04.524401    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:04.524408    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:04.524573    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:04.524770    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:04.524790    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:04.524792    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:04.524835    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:04.524927    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:04.524944    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:04.524948    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:04.524986    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:04.525093    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:04.656964    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:04.657003    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:04.657012    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:04.718194    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:04.718208    2707 retry.go:31] will retry after 276.926463ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:05.053700    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:05.053711    2707 retry.go:31] will retry after 551.883115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:05.661055    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:05.661082    2707 retry.go:31] will retry after 645.201206ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:06.369479    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:06.369512    2707 retry.go:31] will retry after 156.330909ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:06.527801    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:06.581872    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:06.581883    2707 retry.go:31] will retry after 162.077508ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:06.803965    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:06.803976    2707 retry.go:31] will retry after 274.176432ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:07.137044    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:07.137056    2707 retry.go:31] will retry after 329.584636ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:07.528420    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:07.528432    2707 retry.go:31] will retry after 933.531655ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:08.522527    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:08.522563    2707 provision.go:87] duration metric: took 3.999969917s to configureAuth
	W1216 12:05:08.522566    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:08.522570    2707 retry.go:31] will retry after 1.175059496s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:09.699246    2707 provision.go:84] configureAuth start
	I1216 12:05:09.699261    2707 provision.go:143] copyHostCerts
	I1216 12:05:09.699298    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:09.699358    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:09.699364    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:09.699518    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:09.699736    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:09.699771    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:09.699776    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:09.699833    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:09.699926    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:09.699956    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:09.699959    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:09.700018    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:09.700118    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
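provision.go:117 regenerates the machine's server certificate on every configureAuth pass, signed by the local CA and carrying the SANs listed in the log (127.0.0.1, ha-922000, localhost, minikube). A self-contained sketch of SAN-bearing cert generation with crypto/x509 (self-signed here for brevity; the real code signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-922000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"ha-922000", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
	}
	// Self-signed for the sketch; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}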
	I1216 12:05:09.941808    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:09.941876    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:09.941889    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
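Each surrounding "dial failure" line is one SSH handshake in which the VM's sshd rejects the offered key, so the client exhausts its auth methods ("attempted methods [none publickey], no supported methods remain"). A sketch of a dial that produces exactly this error shape, using golang.org/x/crypto/ssh with the machine's id_rsa (path is a placeholder):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/machines/ha-922000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		// Only "none" (implicit) and "publickey" are offered; if the server
		// does not accept this key, the handshake fails with the error seen
		// in the log: "no supported methods remain".
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:22", cfg)
	if err != nil {
		fmt.Println("dial failure:", err) // matches sshutil.go:64 in the log
		return
	}
	defer client.Close()
}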
	W1216 12:05:10.000859    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:10.000870    2707 retry.go:31] will retry after 232.448843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:10.293722    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:10.293735    2707 retry.go:31] will retry after 248.194293ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:10.600241    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:10.600252    2707 retry.go:31] will retry after 780.965596ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:11.441573    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:11.441606    2707 retry.go:31] will retry after 153.236476ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:11.596818    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:11.650328    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:11.650339    2707 retry.go:31] will retry after 213.826124ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:11.924642    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:11.924653    2707 retry.go:31] will retry after 480.968783ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:12.465299    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:12.465310    2707 retry.go:31] will retry after 705.711593ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:13.230041    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:13.230077    2707 provision.go:87] duration metric: took 3.531924375s to configureAuth
	W1216 12:05:13.230080    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:13.230085    2707 retry.go:31] will retry after 2.404139884s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
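The outer loop classifies each configureAuth failure as a "Temporary Error" and schedules another full pass with a progressively longer delay (1.17s, 2.4s, 2.5s, 3.8s, 5.6s, 12.8s across the cycles in this trace). A hedged sketch of that two-level structure, with hypothetical names:

package main

import (
	"fmt"
	"time"
)

// temporaryError marks failures the outer provisioning loop may retry,
// mirroring the "Temporary Error: NewSession: ..." wrapping in the log.
type temporaryError struct{ err error }

func (t temporaryError) Error() string { return "Temporary Error: " + t.err.Error() }

func configureAuth() error {
	// Inner work (cert copy + SSH dial) elided; it fails in this trace.
	return fmt.Errorf("NewSession: ssh: handshake failed")
}

func main() {
	delay := time.Second
	for i := 0; i < 6; i++ {
		if err := configureAuth(); err != nil {
			fmt.Printf("will retry after %v: %v\n", delay, temporaryError{err})
			time.Sleep(delay)
			delay *= 2 // progressively longer outer delays, as in the log
			continue
		}
		return
	}
}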
	I1216 12:05:15.635647    2707 provision.go:84] configureAuth start
	I1216 12:05:15.635667    2707 provision.go:143] copyHostCerts
	I1216 12:05:15.635753    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:15.635830    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:15.635837    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:15.635995    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:15.636197    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:15.636216    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:15.636219    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:15.636270    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:15.636363    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:15.636380    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:15.636382    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:15.636424    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:15.636526    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:15.720234    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:15.720280    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:15.720290    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:15.774760    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:15.774775    2707 retry.go:31] will retry after 190.073785ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:16.020343    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:16.020355    2707 retry.go:31] will retry after 389.182923ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:16.472749    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:16.472762    2707 retry.go:31] will retry after 522.942237ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:17.056105    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:17.056136    2707 retry.go:31] will retry after 142.324034ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:17.200479    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:17.255458    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:17.255468    2707 retry.go:31] will retry after 157.546501ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:17.471325    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:17.471337    2707 retry.go:31] will retry after 257.311276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:17.790054    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:17.790067    2707 retry.go:31] will retry after 382.740828ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:18.231270    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:18.231307    2707 provision.go:87] duration metric: took 2.596216208s to configureAuth
	W1216 12:05:18.231311    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:18.231319    2707 retry.go:31] will retry after 2.521955422s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:20.754849    2707 provision.go:84] configureAuth start
	I1216 12:05:20.754894    2707 provision.go:143] copyHostCerts
	I1216 12:05:20.754928    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:20.754972    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:20.754979    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:20.755062    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:20.755232    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:20.755252    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:20.755255    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:20.755302    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:20.755391    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:20.755428    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:20.755434    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:20.755491    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:20.755607    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:20.911912    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:20.911953    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:20.911963    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:20.968206    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:20.968222    2707 retry.go:31] will retry after 275.29081ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:21.302710    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:21.302722    2707 retry.go:31] will retry after 534.198516ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:21.896323    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:21.896335    2707 retry.go:31] will retry after 635.91849ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:22.592653    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:22.592683    2707 retry.go:31] will retry after 232.718309ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:22.827423    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:22.885951    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:22.885963    2707 retry.go:31] will retry after 284.509464ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:23.230044    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:23.230056    2707 retry.go:31] will retry after 405.67349ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:23.696750    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:23.696761    2707 retry.go:31] will retry after 658.104187ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:24.417300    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:24.417338    2707 provision.go:87] duration metric: took 3.663027125s to configureAuth
	W1216 12:05:24.417342    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:24.417346    2707 retry.go:31] will retry after 3.842984053s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:28.261884    2707 provision.go:84] configureAuth start
	I1216 12:05:28.261897    2707 provision.go:143] copyHostCerts
	I1216 12:05:28.261944    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:28.261993    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:28.262000    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:28.262342    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:28.262501    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:28.262519    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:28.262523    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:28.262564    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:28.262661    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:28.262678    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:28.262681    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:28.262724    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:28.262820    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:28.468941    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:28.468984    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:28.468996    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:28.528832    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:28.528843    2707 retry.go:31] will retry after 260.682744ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:28.847285    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:28.847297    2707 retry.go:31] will retry after 308.425417ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:29.218378    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:29.218391    2707 retry.go:31] will retry after 540.665075ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:29.819373    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:29.819388    2707 retry.go:31] will retry after 488.198311ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:30.368538    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:30.368575    2707 provision.go:87] duration metric: took 2.106888875s to configureAuth
	W1216 12:05:30.368578    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:30.368583    2707 retry.go:31] will retry after 5.644826458s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:36.014987    2707 provision.go:84] configureAuth start
	I1216 12:05:36.014995    2707 provision.go:143] copyHostCerts
	I1216 12:05:36.015035    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:36.015089    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:36.015096    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:36.015686    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:36.015840    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:36.015859    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:36.015862    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:36.015913    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:36.016010    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:36.016025    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:36.016028    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:36.016077    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:36.016170    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:36.101334    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:36.101362    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:36.101370    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:36.158398    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:36.158414    2707 retry.go:31] will retry after 333.633105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:36.548758    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:36.548770    2707 retry.go:31] will retry after 255.987637ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:36.867058    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:36.867072    2707 retry.go:31] will retry after 582.489379ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:37.511203    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:37.511215    2707 retry.go:31] will retry after 567.450789ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:38.141461    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:38.141498    2707 provision.go:87] duration metric: took 2.126632792s to configureAuth
	W1216 12:05:38.141503    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:38.141508    2707 retry.go:31] will retry after 12.777420097s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:50.920492    2707 provision.go:84] configureAuth start
	I1216 12:05:50.920515    2707 provision.go:143] copyHostCerts
	I1216 12:05:50.920564    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:50.920635    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:05:50.920641    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:05:50.920820    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:05:50.921039    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:50.921069    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:05:50.921072    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:05:50.921137    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:05:50.921232    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:50.921261    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:05:50.921265    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:05:50.921317    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:05:50.921414    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:05:50.987367    2707 provision.go:177] copyRemoteCerts
	I1216 12:05:50.987418    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:05:50.987431    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:51.044653    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:51.044667    2707 retry.go:31] will retry after 210.717069ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:51.318012    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:51.318024    2707 retry.go:31] will retry after 374.643142ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:51.754525    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:51.754537    2707 retry.go:31] will retry after 494.450807ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:52.308512    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:52.308524    2707 retry.go:31] will retry after 439.102056ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:52.808263    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:52.808302    2707 retry.go:31] will retry after 131.009233ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:52.941360    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:05:52.997149    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:52.997162    2707 retry.go:31] will retry after 128.857256ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:53.184004    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:53.184016    2707 retry.go:31] will retry after 325.050102ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:53.569044    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:53.569056    2707 retry.go:31] will retry after 484.303977ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:54.109802    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:54.109814    2707 retry.go:31] will retry after 825.148576ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:54.994383    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:54.994421    2707 provision.go:87] duration metric: took 4.073999291s to configureAuth
	W1216 12:05:54.994426    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:54.994431    2707 buildroot.go:185] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:54.994440    2707 machine.go:96] duration metric: took 11m12.846800459s to provisionDockerMachine
	I1216 12:05:54.994446    2707 fix.go:56] duration metric: took 11m12.858267167s for fixHost
	I1216 12:05:54.994449    2707 start.go:83] releasing machines lock for "ha-922000", held for 11m12.858275791s
	W1216 12:05:54.994455    2707 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:05:54.994484    2707 out.go:270] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:05:54.994488    2707 start.go:729] Will try again in 5 seconds ...
	I1216 12:05:59.996466    2707 start.go:360] acquireMachinesLock for ha-922000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:05:59.996580    2707 start.go:364] duration metric: took 90.208µs to acquireMachinesLock for "ha-922000"
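acquireMachinesLock serializes host operations per profile; the spec in the log shows a 500ms poll delay and a 13-minute timeout, and acquisition is near-instant (90µs) because nothing else holds the lock. A rough, self-contained sketch of polling lock acquisition under those parameters (file-based here; the real implementation differs):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until timeout, checking every
// delay interval, loosely mirroring Delay:500ms Timeout:13m0s from the log.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close()
			return nil // lock acquired; caller removes path to release
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireLock("/tmp/minikube-ha-922000.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}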
	I1216 12:05:59.996613    2707 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:05:59.996618    2707 fix.go:54] fixHost starting: 
	I1216 12:05:59.997283    2707 fix.go:112] recreateIfNeeded on ha-922000: state=Running err=<nil>
	W1216 12:05:59.997291    2707 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:06:00.001540    2707 out.go:177] * Updating the running qemu2 "ha-922000" VM ...
	I1216 12:06:00.003073    2707 machine.go:93] provisionDockerMachine start ...
	I1216 12:06:00.003122    2707 main.go:141] libmachine: Using SSH client type: native
	I1216 12:06:00.003217    2707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046cf1b0] 0x1046d19f0 <nil>  [] 0s}  22 <nil> <nil>}
	I1216 12:06:00.003222    2707 main.go:141] libmachine: About to run SSH command:
	hostname
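Once an SSH connection exists, libmachine runs each command by opening a fresh session on the client; in this run the dial itself keeps failing (below), so execution is never reached. A sketch of the execution step, assuming a client obtained from a successful ssh.Dial:

package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// runSSHCommand shows the shape of the step behind "About to run SSH command":
// one session per command, combined stdout+stderr returned to the caller.
func runSSHCommand(client *ssh.Client, cmd string) (string, error) {
	session, err := client.NewSession()
	if err != nil {
		return "", fmt.Errorf("NewSession: %w", err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// In this trace the dial never succeeds, so NewSession is never reached;
	// with a working client this would print the VM's hostname:
	//   out, err := runSSHCommand(client, "hostname")
	_ = runSSHCommand
}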
	I1216 12:06:00.059591    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[identical "Error dialing TCP" failure repeated roughly every 3s, 60 attempts in total, from 12:06:00 through 12:09:00]
	I1216 12:09:00.578692    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:03.580722    2707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:09:03.580735    2707 buildroot.go:166] provisioning hostname "ha-922000"
	I1216 12:09:03.580801    2707 main.go:141] libmachine: Using SSH client type: native
	I1216 12:09:03.580969    2707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046cf1b0] 0x1046d19f0 <nil>  [] 0s}  22 <nil> <nil>}
	I1216 12:09:03.580974    2707 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-922000 && echo "ha-922000" | sudo tee /etc/hostname
	I1216 12:09:03.636471    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:06.699190    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:09.758670    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:12.812431    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:15.872317    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:18.934094    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:21.995308    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:25.055315    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:28.115847    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:31.175967    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:34.238213    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:37.297500    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:40.354450    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:43.416190    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:46.476393    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:49.537495    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:52.591706    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:55.653012    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:09:58.712925    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:10:01.769318    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 39 identical "Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain" retries, one every ~3 s, from 12:10:04 through 12:12:01, elided ...]
	I1216 12:12:04.173345    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
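The cadence above is libmachine's fixed-interval SSH dial retry: each TCP connection is established, but the guest rejects the only offered auth method (publickey), so the dial is reattempted roughly every three seconds. A minimal sketch of that pattern in Go (the interval, deadline, and error handling are illustrative assumptions, not libmachine's actual code):

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// dialLoop retries fn at a fixed interval until it succeeds or the
	// deadline passes -- a sketch of the ~3 s retry cadence in the log,
	// not libmachine's implementation.
	func dialLoop(deadline, interval time.Duration, fn func() error) error {
		stop := time.Now().Add(deadline)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("gave up dialing: %w", err)
			}
			log.Printf("Error dialing TCP: %v", err)
			time.Sleep(interval)
		}
	}

	func main() {
		err := dialLoop(30*time.Second, 3*time.Second, func() error {
			return fmt.Errorf("ssh: handshake failed") // stand-in for the real dial
		})
		log.Println(err)
	}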
	I1216 12:12:07.175397    2707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:12:07.175472    2707 main.go:141] libmachine: Using SSH client type: native
	I1216 12:12:07.175658    2707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046cf1b0] 0x1046d19f0 <nil>  [] 0s}  22 <nil> <nil>}
	I1216 12:12:07.175665    2707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-922000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-922000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-922000' | sudo tee -a /etc/hosts; 
				fi
			fi
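The shell block above is the provisioning step that pins the machine name in /etc/hosts: if no line ends in ha-922000, it either rewrites an existing 127.0.1.1 entry or appends one. minikube pushes commands like this over SSH with publickey auth; the "unable to authenticate, attempted methods [none publickey]" lines are exactly what golang.org/x/crypto/ssh returns when the guest refuses the offered key. A self-contained sketch of that transport (the address, username, key path, and command are placeholders, not values from this run):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-922000/id_rsa") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		// Dial fails with "ssh: unable to authenticate, attempted methods
		// [none publickey]" when the guest's authorized_keys does not match signer.
		client, err := ssh.Dial("tcp", "192.168.105.5:22", cfg) // placeholder address
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("echo '127.0.1.1 ha-922000' | sudo tee -a /etc/hosts")
		fmt.Printf("output: %s err: %v\n", out, err)
	}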
	I1216 12:12:07.232286    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 58 identical ssh dial-retry failures ("unable to authenticate, attempted methods [none publickey]"), one every ~3 s, from 12:12:10 through 12:15:04, elided ...]
	I1216 12:15:07.775129    2707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:10.777229    2707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:15:10.777254    2707 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20091-990/.minikube CaCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20091-990/.minikube}
	I1216 12:15:10.777269    2707 buildroot.go:174] setting up certificates
	I1216 12:15:10.777277    2707 provision.go:84] configureAuth start
	I1216 12:15:10.777284    2707 provision.go:143] copyHostCerts
	I1216 12:15:10.777316    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:10.777377    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:10.777387    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:10.777536    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:10.777754    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:10.777787    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:10.777791    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:10.777858    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:10.777954    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:10.777984    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:10.777990    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:10.778050    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:10.778152    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
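The server certificate generated above is a CA-signed TLS cert whose SANs cover the machine name, localhost, and 127.0.0.1 (note the empty leading SAN slot where the machine IP should be: the VM never reported an address). A self-contained sketch of issuing such a cert with Go's crypto/x509 (keys, lifetimes, and subject are illustrative assumptions; this is not minikube's provision code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// stand-ins for ca.pem / ca-key.pem: a fresh self-signed CA
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// server cert carrying the SANs seen in the log line above
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-922000"}},
			DNSNames:     []string{"ha-922000", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}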
	I1216 12:15:10.860684    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:10.860726    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:10.860736    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:10.920878    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:10.920890    2707 retry.go:31] will retry after 179.064946ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:11.159575    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:11.159590    2707 retry.go:31] will retry after 460.709948ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:11.681448    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:11.681461    2707 retry.go:31] will retry after 406.337097ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:12.146497    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:12.146509    2707 retry.go:31] will retry after 713.966896ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:12.920059    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:12.920095    2707 provision.go:87] duration metric: took 2.142815666s to configureAuth
	W1216 12:15:12.920099    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:12.920103    2707 retry.go:31] will retry after 146.518µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
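configureAuth itself is wrapped in a retry on temporary errors with an escalating delay; across the attempts in this log the waits grow from 146µs through 1.03ms. A minimal sketch of that retry-on-temporary-error shape (the doubling policy and attempt cap are assumptions, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryBackoff reruns fn until it succeeds or attempts are exhausted,
	// roughly doubling the wait each time -- a sketch of the growing
	// "will retry after ..." delays in the log, not minikube's retry.go.
	func retryBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retryBackoff(5, 150*time.Microsecond, func() error {
			return errors.New("configureAuth failed") // stand-in for the real call
		})
		fmt.Println(err)
	}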
	[... configureAuth retried five more times between 12:15:12 and 12:15:29, each pass repeating the same copyHostCerts -> server-cert generation -> copyRemoteCerts sequence and failing with the identical ssh publickey handshake error; per-attempt durations were 2.90s, 2.77s, 3.33s, 3.88s, and 3.39s, with the retry delay growing from 161µs to 1.03ms ...]
	I1216 12:15:29.190592    2707 provision.go:84] configureAuth start
	I1216 12:15:29.190600    2707 provision.go:143] copyHostCerts
	I1216 12:15:29.190621    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:29.190649    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:29.190654    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:29.190751    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:29.190895    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:29.190914    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:29.190917    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:29.190960    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:29.191047    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:29.191063    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:29.191065    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:29.191104    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:29.191193    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:29.259142    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:29.259179    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:29.259187    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:29.318336    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:29.318347    2707 retry.go:31] will retry after 135.321503ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:29.513393    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:29.513402    2707 retry.go:31] will retry after 259.67647ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:29.834410    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:29.834422    2707 retry.go:31] will retry after 833.527715ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:30.727754    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:30.727785    2707 retry.go:31] will retry after 313.633209ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:31.043443    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:31.098648    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:31.098661    2707 retry.go:31] will retry after 127.749698ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:31.284614    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:31.284626    2707 retry.go:31] will retry after 534.505257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:31.880734    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:31.880749    2707 retry.go:31] will retry after 315.231913ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:32.251251    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:32.251288    2707 provision.go:87] duration metric: took 3.060692292s to configureAuth
	W1216 12:15:32.251292    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:32.251297    2707 retry.go:31] will retry after 1.354803ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
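The interleaved "retry.go:31] will retry after ..." lines show a bounded retry helper that sleeps a randomized interval between attempts; the intervals above (135ms, 259ms, 833ms, ...) are jittered rather than strictly exponential. A rough stand-alone sketch of that pattern (hypothetical names and bounds; not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter calls fn until it succeeds or maxAttempts is exhausted, sleeping
// a randomized duration between attempts, mirroring the varying
// "will retry after Nms" intervals in the log.
func retryAfter(maxAttempts int, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := time.Duration(100+rand.Intn(700)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retryAfter(4, func() error {
		// Stand-in for the real SSH dial that keeps failing above.
		return errors.New("ssh: handshake failed")
	})
	fmt.Println(err)
}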
	I1216 12:15:32.253031    2707 provision.go:84] configureAuth start
	I1216 12:15:32.253037    2707 provision.go:143] copyHostCerts
	I1216 12:15:32.253054    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:32.253083    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:32.253088    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:32.253193    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:32.253355    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:32.253372    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:32.253376    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:32.253415    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:32.253504    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:32.253520    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:32.253524    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:32.253561    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:32.253651    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:32.444890    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:32.444933    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:32.444944    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:32.503721    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:32.503735    2707 retry.go:31] will retry after 303.617357ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:32.869115    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:32.869128    2707 retry.go:31] will retry after 529.573768ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:33.457398    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:33.457409    2707 retry.go:31] will retry after 755.414139ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:34.270848    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:34.270882    2707 provision.go:87] duration metric: took 2.017847166s to configureAuth
	W1216 12:15:34.270886    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:34.270890    2707 retry.go:31] will retry after 1.515697ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:34.272798    2707 provision.go:84] configureAuth start
	I1216 12:15:34.272804    2707 provision.go:143] copyHostCerts
	I1216 12:15:34.272827    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:34.272855    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:34.272859    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:34.272963    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:34.273116    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:34.273135    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:34.273138    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:34.273177    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:34.273278    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:34.273293    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:34.273296    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:34.273333    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:34.273429    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:34.372986    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:34.373021    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:34.373030    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:34.429608    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:34.429619    2707 retry.go:31] will retry after 127.013554ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:34.614969    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:34.614985    2707 retry.go:31] will retry after 197.630129ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:34.871771    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:34.871783    2707 retry.go:31] will retry after 693.932012ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:35.621930    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:35.621963    2707 retry.go:31] will retry after 269.456872ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:35.893475    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:35.950289    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:35.950300    2707 retry.go:31] will retry after 336.550646ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:36.346492    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:36.346503    2707 retry.go:31] will retry after 305.608178ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:36.714138    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:36.714150    2707 retry.go:31] will retry after 735.444698ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:37.508328    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:37.508366    2707 provision.go:87] duration metric: took 3.23556325s to configureAuth
	W1216 12:15:37.508370    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:37.508375    2707 retry.go:31] will retry after 2.234466ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:37.511182    2707 provision.go:84] configureAuth start
	I1216 12:15:37.511194    2707 provision.go:143] copyHostCerts
	I1216 12:15:37.511217    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:37.511246    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:37.511250    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:37.511343    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:37.511497    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:37.511513    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:37.511516    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:37.511565    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:37.511654    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:37.511670    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:37.511673    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:37.511714    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:37.511810    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:37.568433    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:37.568463    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:37.568470    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:37.622314    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:37.622326    2707 retry.go:31] will retry after 126.107124ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:37.807014    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:37.807025    2707 retry.go:31] will retry after 426.976668ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:38.296616    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:38.296627    2707 retry.go:31] will retry after 361.522058ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:38.719940    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:38.719950    2707 retry.go:31] will retry after 702.457174ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:39.484130    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:39.484167    2707 provision.go:87] duration metric: took 1.972974458s to configureAuth
	W1216 12:15:39.484170    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:39.484175    2707 retry.go:31] will retry after 4.132211ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:39.489347    2707 provision.go:84] configureAuth start
	I1216 12:15:39.489354    2707 provision.go:143] copyHostCerts
	I1216 12:15:39.489382    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:39.489415    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:39.489419    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:39.489517    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:39.489688    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:39.489706    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:39.489709    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:39.489752    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:39.489839    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:39.489854    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:39.489858    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:39.489896    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:39.490003    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:39.601666    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:39.601703    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:39.601712    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:39.660651    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:39.660661    2707 retry.go:31] will retry after 166.694513ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:39.887956    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:39.887967    2707 retry.go:31] will retry after 367.548101ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:40.315271    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:40.315282    2707 retry.go:31] will retry after 429.091314ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:40.804806    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:40.804836    2707 retry.go:31] will retry after 358.599954ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:41.165464    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:41.220639    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:41.220650    2707 retry.go:31] will retry after 157.813444ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:41.438914    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:41.438925    2707 retry.go:31] will retry after 317.701666ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:41.817994    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:41.818005    2707 retry.go:31] will retry after 790.182629ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:42.669344    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:42.669380    2707 provision.go:87] duration metric: took 3.180028834s to configureAuth
	W1216 12:15:42.669384    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:42.669388    2707 retry.go:31] will retry after 6.05018ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:42.675874    2707 provision.go:84] configureAuth start
	I1216 12:15:42.675884    2707 provision.go:143] copyHostCerts
	I1216 12:15:42.675901    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:42.675930    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:42.675935    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:42.676033    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:42.676188    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:42.676204    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:42.676208    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:42.676248    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:42.676339    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:42.676355    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:42.676358    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:42.676395    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:42.676489    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:42.751675    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:42.751704    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:42.751710    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:42.808730    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:42.808743    2707 retry.go:31] will retry after 194.839558ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:43.063231    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:43.063243    2707 retry.go:31] will retry after 256.394296ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:43.381432    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:43.381444    2707 retry.go:31] will retry after 508.079479ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:43.948879    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:43.948892    2707 retry.go:31] will retry after 463.049356ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:44.472491    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:44.472526    2707 retry.go:31] will retry after 163.08374ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:44.637645    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:44.695493    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:44.695504    2707 retry.go:31] will retry after 259.356696ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:45.017260    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:45.017271    2707 retry.go:31] will retry after 475.886349ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:45.554540    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:45.554552    2707 retry.go:31] will retry after 544.559765ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:46.161364    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:46.161402    2707 provision.go:87] duration metric: took 3.485519708s to configureAuth
	W1216 12:15:46.161405    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:46.161415    2707 retry.go:31] will retry after 7.624639ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:46.170983    2707 provision.go:84] configureAuth start
	I1216 12:15:46.170989    2707 provision.go:143] copyHostCerts
	I1216 12:15:46.171017    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:46.171068    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:46.171073    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:46.171175    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:46.171355    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:46.171385    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:46.171390    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:46.171445    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:46.171532    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:46.171561    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:46.171564    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:46.171615    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:46.171713    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:46.246138    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:46.246171    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:46.246179    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:46.304473    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:46.304485    2707 retry.go:31] will retry after 319.729746ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:46.684352    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:46.684365    2707 retry.go:31] will retry after 423.530608ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:47.167056    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:47.167068    2707 retry.go:31] will retry after 716.334869ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:47.945354    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:47.945387    2707 retry.go:31] will retry after 293.851589ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:48.241281    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:48.296631    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:48.296642    2707 retry.go:31] will retry after 195.619775ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:48.551613    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:48.551624    2707 retry.go:31] will retry after 522.687131ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:49.136314    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:49.136326    2707 retry.go:31] will retry after 406.885027ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:49.603803    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:49.603840    2707 provision.go:87] duration metric: took 3.432853917s to configureAuth
	W1216 12:15:49.603844    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:49.603848    2707 retry.go:31] will retry after 8.936584ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:49.614831    2707 provision.go:84] configureAuth start
	I1216 12:15:49.614841    2707 provision.go:143] copyHostCerts
	I1216 12:15:49.614864    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:49.614893    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:49.614898    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:49.614978    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:49.615133    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:49.615149    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:49.615153    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:49.615192    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:49.615279    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:49.615295    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:49.615299    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:49.615334    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:49.615422    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:49.784481    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:49.784531    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:49.784540    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:49.842186    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:49.842199    2707 retry.go:31] will retry after 319.933796ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:50.223316    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:50.223328    2707 retry.go:31] will retry after 380.385067ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:50.664691    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:50.664706    2707 retry.go:31] will retry after 414.527574ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:51.138550    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:51.138582    2707 retry.go:31] will retry after 190.343312ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:51.330643    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:51.384975    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:51.384987    2707 retry.go:31] will retry after 130.128666ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:51.575612    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:51.575624    2707 retry.go:31] will retry after 396.486633ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:52.031877    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:52.031890    2707 retry.go:31] will retry after 299.842934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:52.392860    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:52.392898    2707 provision.go:87] duration metric: took 2.778059s to configureAuth
	W1216 12:15:52.392901    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:52.392912    2707 retry.go:31] will retry after 27.787768ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:52.422716    2707 provision.go:84] configureAuth start
	I1216 12:15:52.422725    2707 provision.go:143] copyHostCerts
	I1216 12:15:52.422756    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:52.422786    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:52.422790    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:52.422904    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:52.423066    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:52.423083    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:52.423086    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:52.423126    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:52.423219    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:52.423235    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:52.423238    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:52.423274    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:52.423370    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:52.579739    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:52.579784    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:52.579794    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:52.637509    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:52.637521    2707 retry.go:31] will retry after 252.578882ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:52.949738    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:52.949750    2707 retry.go:31] will retry after 412.637687ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:53.421934    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:53.421951    2707 retry.go:31] will retry after 468.042895ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:53.950600    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:53.950611    2707 retry.go:31] will retry after 618.294218ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:54.631818    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:54.631855    2707 provision.go:87] duration metric: took 2.209132667s to configureAuth
	W1216 12:15:54.631858    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:54.631862    2707 retry.go:31] will retry after 37.287953ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:54.671166    2707 provision.go:84] configureAuth start
	I1216 12:15:54.671183    2707 provision.go:143] copyHostCerts
	I1216 12:15:54.671215    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:54.671248    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:54.671253    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:54.671351    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:54.671541    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:54.671559    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:54.671562    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:54.671603    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:54.671697    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:54.671713    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:54.671716    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:54.671755    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:54.671849    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:54.784351    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:54.784395    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:54.784404    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:54.842027    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:54.842039    2707 retry.go:31] will retry after 232.490381ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:55.133631    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:55.133645    2707 retry.go:31] will retry after 537.660488ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:55.734105    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:55.734116    2707 retry.go:31] will retry after 579.101011ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:56.373515    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:56.373547    2707 retry.go:31] will retry after 271.808744ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:56.647385    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:56.706455    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:56.706464    2707 retry.go:31] will retry after 132.973593ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:56.896194    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:56.896208    2707 retry.go:31] will retry after 463.085043ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:57.415718    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:57.415731    2707 retry.go:31] will retry after 513.727915ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:57.987816    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:57.987849    2707 provision.go:87] duration metric: took 3.316678s to configureAuth
	W1216 12:15:57.987853    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:57.987857    2707 retry.go:31] will retry after 58.312388ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:58.048186    2707 provision.go:84] configureAuth start
	I1216 12:15:58.048198    2707 provision.go:143] copyHostCerts
	I1216 12:15:58.048237    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:58.048272    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:15:58.048278    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:15:58.048393    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:15:58.048568    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:58.048588    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:15:58.048591    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:15:58.048637    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:15:58.048741    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:58.048757    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:15:58.048760    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:15:58.048801    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:15:58.048890    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:15:58.076560    2707 provision.go:177] copyRemoteCerts
	I1216 12:15:58.076602    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:15:58.076610    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:58.131289    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:58.131301    2707 retry.go:31] will retry after 172.355314ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:58.363182    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:58.363194    2707 retry.go:31] will retry after 203.953365ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:58.627002    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:58.627014    2707 retry.go:31] will retry after 322.22514ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:59.008513    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:59.008547    2707 retry.go:31] will retry after 316.884516ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:59.327458    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:15:59.386158    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:59.386172    2707 retry.go:31] will retry after 250.280251ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:15:59.696378    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:15:59.696391    2707 retry.go:31] will retry after 401.788257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:00.161213    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:00.161226    2707 retry.go:31] will retry after 798.20241ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:01.021115    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:01.021152    2707 provision.go:87] duration metric: took 2.972959584s to configureAuth
	W1216 12:16:01.021156    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:01.021160    2707 retry.go:31] will retry after 86.734544ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:01.108353    2707 provision.go:84] configureAuth start
	I1216 12:16:01.108361    2707 provision.go:143] copyHostCerts
	I1216 12:16:01.108382    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:01.108435    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:16:01.108440    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:01.108546    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:16:01.108734    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:01.108752    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:16:01.108755    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:01.108797    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:16:01.108885    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:01.108901    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:16:01.108904    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:01.108941    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:16:01.109038    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:16:01.165063    2707 provision.go:177] copyRemoteCerts
	I1216 12:16:01.165108    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:16:01.165118    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:01.222683    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:01.222696    2707 retry.go:31] will retry after 263.805775ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:01.548185    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:01.548196    2707 retry.go:31] will retry after 217.097363ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:01.820963    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:01.820973    2707 retry.go:31] will retry after 830.390204ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:02.710637    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:02.710670    2707 retry.go:31] will retry after 313.721376ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:03.026429    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:03.079596    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:03.079607    2707 retry.go:31] will retry after 355.704426ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:03.497417    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:03.497428    2707 retry.go:31] will retry after 239.828019ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:03.797302    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:03.797314    2707 retry.go:31] will retry after 809.802097ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:04.668564    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:04.668600    2707 provision.go:87] duration metric: took 3.560242209s to configureAuth
	W1216 12:16:04.668603    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:04.668608    2707 retry.go:31] will retry after 78.751763ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:04.749377    2707 provision.go:84] configureAuth start
	I1216 12:16:04.749386    2707 provision.go:143] copyHostCerts
	I1216 12:16:04.749408    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:04.749451    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:16:04.749457    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:04.749624    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:16:04.749839    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:04.749858    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:16:04.749861    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:04.749905    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:16:04.750005    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:04.750025    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:16:04.750028    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:04.750066    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:16:04.750165    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:16:05.059058    2707 provision.go:177] copyRemoteCerts
	I1216 12:16:05.059107    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:16:05.059115    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:05.113401    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:05.113415    2707 retry.go:31] will retry after 315.387421ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:05.486109    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:05.486119    2707 retry.go:31] will retry after 289.558756ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:05.838227    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:05.838238    2707 retry.go:31] will retry after 495.628988ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:06.395984    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:06.396015    2707 retry.go:31] will retry after 204.646698ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:06.602689    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:06.660062    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:06.660073    2707 retry.go:31] will retry after 232.534574ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:06.953089    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:06.953099    2707 retry.go:31] will retry after 466.346456ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:07.475637    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:07.475649    2707 retry.go:31] will retry after 821.800771ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:08.357362    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:08.357398    2707 provision.go:87] duration metric: took 3.608016834s to configureAuth
	W1216 12:16:08.357405    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:08.357412    2707 retry.go:31] will retry after 148.018553ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:08.507451    2707 provision.go:84] configureAuth start
	I1216 12:16:08.507466    2707 provision.go:143] copyHostCerts
	I1216 12:16:08.507502    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:08.507545    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:16:08.507552    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:08.507697    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:16:08.507902    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:08.507920    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:16:08.507924    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:08.507973    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:16:08.508074    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:08.508092    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:16:08.508095    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:08.508140    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:16:08.508234    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:16:08.570330    2707 provision.go:177] copyRemoteCerts
	I1216 12:16:08.570368    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:16:08.570378    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:08.628202    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:08.628213    2707 retry.go:31] will retry after 336.107163ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:09.020950    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:09.020963    2707 retry.go:31] will retry after 203.524538ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:09.284926    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:09.284938    2707 retry.go:31] will retry after 606.432436ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:09.952747    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:09.952782    2707 retry.go:31] will retry after 170.236124ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:10.125044    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:10.178259    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:10.178271    2707 retry.go:31] will retry after 368.472695ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:10.605574    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:10.605585    2707 retry.go:31] will retry after 407.444499ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:11.073829    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:11.073840    2707 retry.go:31] will retry after 806.666621ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:11.943237    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:11.943274    2707 provision.go:87] duration metric: took 3.435811625s to configureAuth
	W1216 12:16:11.943277    2707 buildroot.go:177] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:11.943284    2707 retry.go:31] will retry after 238.20758ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:12.183514    2707 provision.go:84] configureAuth start
	I1216 12:16:12.183528    2707 provision.go:143] copyHostCerts
	I1216 12:16:12.183563    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:12.183623    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:16:12.183628    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:16:12.184359    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:16:12.184529    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:12.184561    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:16:12.184564    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:16:12.184631    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:16:12.184727    2707 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:12.184756    2707 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:16:12.184759    2707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:16:12.184818    2707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:16:12.184934    2707 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.ha-922000 san=[ 127.0.0.1 ha-922000 localhost minikube]
	I1216 12:16:12.213019    2707 provision.go:177] copyRemoteCerts
	I1216 12:16:12.213053    2707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:16:12.213061    2707 sshutil.go:53] new ssh client: &{IP: Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/ha-922000/id_rsa Username:docker}
	W1216 12:16:12.268734    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:12.268745    2707 retry.go:31] will retry after 342.10386ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:12.670872    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:12.670885    2707 retry.go:31] will retry after 453.547778ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W1216 12:16:13.186624    2707 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1216 12:16:13.186635    2707 retry.go:31] will retry after 822.102726ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

** /stderr **
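
The stderr capture above is the failure in miniature: each configureAuth pass regenerates the host certificates, copyRemoteCerts then dials SSH, every handshake is rejected (publickey auth fails), sshutil retries after a randomized sub-second delay, and after roughly three seconds the whole configureAuth step starts over. A minimal sketch of that retry shape, assuming nothing about minikube's actual retry.go beyond what the log shows:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the overall budget is
// spent, sleeping a random sub-second interval between attempts, mirroring
// the "will retry after 232.490381ms" lines above.
func retryUntil(budget time.Duration, fn func() error) error {
	start := time.Now()
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > budget {
			return fmt.Errorf("gave up after %s: %w", budget, err)
		}
		wait := time.Duration(rand.Int63n(int64(900 * time.Millisecond)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
}

func main() {
	err := retryUntil(3*time.Second, func() error {
		// Stand-in for the real SSH dial; in the run above the guest
		// rejects the offered key every time, so this never succeeds.
		return errors.New("ssh: handshake failed")
	})
	fmt.Println(err)
}

Because the rejection here is deterministic rather than transient, the loop can never converge; the retries only burn wall-clock time until the test's deadline kills the process.
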
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-922000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-922000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-922000: context deadline exceeded (1.583µs)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-922000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-922000	

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-922000 -n ha-922000: exit status 7 (39.652792ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1216 12:16:13.852280    4890 status.go:393] failed to get driver ip: parsing IP: 
	E1216 12:16:13.852287    4890 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-922000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (1473.93s)
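
The assertions above pair "signal: killed" with a "context deadline exceeded" reported after only 1.583µs. That pattern is what a shared deadline produces: the first command is killed when the deadline expires mid-run, and the next command started on the same context is refused before it runs at all. A hedged illustration of just those os/exec semantics (not the test's code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// Outlives the deadline: killed mid-run, Run reports "signal: killed".
	fmt.Println("first run:", exec.CommandContext(ctx, "sleep", "1").Run())

	// Started after expiry: refused immediately with ctx.Err(),
	// i.e. "context deadline exceeded" in microseconds.
	fmt.Println("second run:", exec.CommandContext(ctx, "sleep", "1").Run())
}
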

TestJSONOutput/start/Command (250.36s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-304000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1216 12:18:28.316190    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:20:18.617820    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-304000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (4m10.364402083s)

-- stdout --
	{"specversion":"1.0","id":"f672611e-e85b-4538-b2ab-38fbbf81f5a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa284d29-cdac-4102-95d6-a8495e7686c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20091"}}
	{"specversion":"1.0","id":"7494eaf2-1b6c-4576-9404-07eb318fe505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig"}}
	{"specversion":"1.0","id":"39d2e1fc-1df5-4555-955d-ec631bde42a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a7788fa4-2641-4fa8-8418-450ee1aebea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8967eb64-e785-4464-84b7-83474c84d265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube"}}
	{"specversion":"1.0","id":"c906977b-2409-4815-acf5-ab1bcc6030f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fb5edd25-52e1-4624-81b7-24e79afaa7bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2d59618-6060-4d59-9ae0-dc427ad27bd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"51488400-d54d-4c8f-9368-50f718c55cd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-304000\" primary control-plane node in \"json-output-304000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"03934f5b-86c2-4d88-a28b-7da97b4269a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"0487340c-8eef-4528-be7d-3a14dfe05e32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:\n\n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd \n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd"}}
	{"specversion":"1.0","id":"972cbc84-6437-477c-b5c5-a6b28cde409e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Successfully unblocked bootpd process from firewall, retrying"}}
	{"specversion":"1.0","id":"cdd4b5b8-aa7f-4474-a935-18f48fc09dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-304000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e9ef9c5-9688-46d4-8184-0a93b1afb9cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 06:2e:99:17:c7:2b"}}
	{"specversion":"1.0","id":"6281de34-d24d-4fd4-8bb8-7b2aa4a5750c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a86cf8a-78e2-47d8-8e68-ebc68d19152f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:\n\n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd \n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd"}}
	{"specversion":"1.0","id":"61a4d876-71bb-4765-b562-604929ebcfb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Successfully unblocked bootpd process from firewall, retrying"}}
	{"specversion":"1.0","id":"a7047ccb-48c0-443c-8d39-61931bee8cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-304000\" may fix it: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 32:4c:53:59:c1:d5"}}
	{"specversion":"1.0","id":"cd028b5b-7d2c-4736-9c2a-8d73822678be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 32:4c:53:59:c1:d5","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"5b3c58ac-a145-434e-a42c-b59778be9738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-304000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
--- FAIL: TestJSONOutput/start/Command (250.36s)
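
Each stdout line above is a CloudEvents envelope whose data field is a flat map of strings, with progress steps carrying currentstep, totalsteps, and name. The two parallel subtests that follow consume exactly this stream. A minimal sketch of such a consumer, with only the field names taken from the events above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models just the parts of the envelope used here.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g.: minikube start --output=json ... | thisprog
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		if step, ok := ev.Data["currentstep"]; ok {
			fmt.Printf("step %s/%s: %s\n", step, ev.Data["totalsteps"], ev.Data["name"])
		}
	}
}
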

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-304000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f672611e-e85b-4538-b2ab-38fbbf81f5a7
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: fa284d29-cdac-4102-95d6-a8495e7686c8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=20091"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7494eaf2-1b6c-4576-9404-07eb318fe505
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 39d2e1fc-1df5-4555-955d-ec631bde42a6
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a7788fa4-2641-4fa8-8418-450ee1aebea3
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 8967eb64-e785-4464-84b7-83474c84d265
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c906977b-2409-4815-acf5-ab1bcc6030f1
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fb5edd25-52e1-4624-81b7-24e79afaa7bf
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e2d59618-6060-4d59-9ae0-dc427ad27bd1
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 51488400-d54d-4c8f-9368-50f718c55cd1
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-304000\" primary control-plane node in \"json-output-304000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 03934f5b-86c2-4d88-a28b-7da97b4269a6
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0487340c-8eef-4528-be7d-3a14dfe05e32
datacontenttype: application/json
Data,
{
"message": "Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:\n\n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd \n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 972cbc84-6437-477c-b5c5-a6b28cde409e
datacontenttype: application/json
Data,
{
"message": "Successfully unblocked bootpd process from firewall, retrying"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: cdd4b5b8-aa7f-4474-a935-18f48fc09dcc
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-304000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7e9ef9c5-9688-46d4-8184-0a93b1afb9cf
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 06:2e:99:17:c7:2b"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6281de34-d24d-4fd4-8bb8-7b2aa4a5750c
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5a86cf8a-78e2-47d8-8e68-ebc68d19152f
datacontenttype: application/json
Data,
{
"message": "Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:\n\n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd \n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 61a4d876-71bb-4765-b562-604929ebcfb0
datacontenttype: application/json
Data,
{
"message": "Successfully unblocked bootpd process from firewall, retrying"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a7047ccb-48c0-443c-8d39-61931bee8cb9
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-304000\" may fix it: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 32:4c:53:59:c1:d5"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: cd028b5b-7d2c-4736-9c2a-8d73822678be
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "error provisioning guest: Failed to start host: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 32:4c:53:59:c1:d5",
"name": "GUEST_PROVISION",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5b3c58ac-a145-434e-a42c-b59778be9738
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
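
The invariant being enforced: one currentstep value, one step message. The retry path above reuses currentstep 9 (name "Creating VM") for both creating and deleting the VM, which is what trips it. A sketch of the check under that reading (illustrative, not the test's implementation):

package main

import "fmt"

// checkDistinct rejects a stream in which one currentstep value is
// paired with more than one message.
func checkDistinct(events [][2]string) error { // (currentstep, message) pairs
	seen := map[string]string{}
	for _, ev := range events {
		step, msg := ev[0], ev[1]
		if prev, ok := seen[step]; ok && prev != msg {
			return fmt.Errorf("step %s has already been assigned to another step:\n%s\nCannot use for:\n%s", step, prev, msg)
		}
		seen[step] = msg
	}
	return nil
}

func main() {
	fmt.Println(checkDistinct([][2]string{
		{"9", "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ..."},
		{"9", "Deleting \"json-output-304000\" in qemu2 ..."},
	}))
}
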

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f672611e-e85b-4538-b2ab-38fbbf81f5a7
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: fa284d29-cdac-4102-95d6-a8495e7686c8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=20091"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7494eaf2-1b6c-4576-9404-07eb318fe505
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 39d2e1fc-1df5-4555-955d-ec631bde42a6
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a7788fa4-2641-4fa8-8418-450ee1aebea3
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 8967eb64-e785-4464-84b7-83474c84d265
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: c906977b-2409-4815-acf5-ab1bcc6030f1
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fb5edd25-52e1-4624-81b7-24e79afaa7bf
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e2d59618-6060-4d59-9ae0-dc427ad27bd1
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 51488400-d54d-4c8f-9368-50f718c55cd1
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-304000\" primary control-plane node in \"json-output-304000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 03934f5b-86c2-4d88-a28b-7da97b4269a6
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0487340c-8eef-4528-be7d-3a14dfe05e32
datacontenttype: application/json
Data,
{
"message": "Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:\n\n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd \n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 972cbc84-6437-477c-b5c5-a6b28cde409e
datacontenttype: application/json
Data,
{
"message": "Successfully unblocked bootpd process from firewall, retrying"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: cdd4b5b8-aa7f-4474-a935-18f48fc09dcc
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-304000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 7e9ef9c5-9688-46d4-8184-0a93b1afb9cf
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 06:2e:99:17:c7:2b"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6281de34-d24d-4fd4-8bb8-7b2aa4a5750c
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5a86cf8a-78e2-47d8-8e68-ebc68d19152f
datacontenttype: application/json
Data,
{
"message": "Your firewall is blocking bootpd which is required for this configuration. The following commands will be executed to unblock bootpd:\n\n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd \n    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 61a4d876-71bb-4765-b562-604929ebcfb0
datacontenttype: application/json
Data,
{
"message": "Successfully unblocked bootpd process from firewall, retrying"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a7047ccb-48c0-443c-8d39-61931bee8cb9
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-304000\" may fix it: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 32:4c:53:59:c1:d5"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: cd028b5b-7d2c-4736-9c2a-8d73822678be
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "error provisioning guest: Failed to start host: creating host: create: creating: ip not found: failed to get IP address: could not find an IP address for 32:4c:53:59:c1:d5",
"name": "GUEST_PROVISION",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5b3c58ac-a145-434e-a42c-b59778be9738
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
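
The event stream above captures the recurring failure mode for this group: the qemu2 VM is created but never obtains an IP over socket_vmnet, minikube twice unblocks bootpd in the macOS application firewall, retries, and finally exits with GUEST_PROVISION (exit code 80). For reference, the firewall commands minikube reports executing are, verbatim:

    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/libexec/bootpd
    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblock /usr/libexec/bootpd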
TestJSONOutput/pause/Command (0.05s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-304000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-304000 --output=json --user=testUser: exit status 50 (53.995791ms)
-- stdout --
	{"specversion":"1.0","id":"406d83c3-14b4-4561-a790-6e4cb9d45435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-304000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-304000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.05s)
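
Note that the advice field above contains an unrendered Go template ({{.profileArg}}); the same leak shows up in the unpause stderr below as "<no value>". With the profile from this run filled in, the suggested recovery would presumably read:

    $ out/minikube-darwin-arm64 delete -p json-output-304000
    $ out/minikube-darwin-arm64 start -p json-output-304000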
TestJSONOutput/unpause/Command (0.06s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-304000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-304000 --output=json --user=testUser: exit status 50 (60.8885ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-304000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-304000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
TestJSONOutput/stop/Command (184.07s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-304000 --output=json --user=testUser
E1216 12:23:21.722426    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:23:28.356958    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p json-output-304000 --output=json --user=testUser: exit status 82 (3m4.066475916s)
-- stdout --
	{"specversion":"1.0","id":"cdbc3b06-5843-4be1-be43-da72476dd7a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-304000\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"13c47d15-f217-43e1-b4af-0f0dc8df5d16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"Unable to stop VM: Temporary Error: stop: Maximum number of retries (60) exceeded","name":"GUEST_STOP_TIMEOUT","url":""}}
	{"specversion":"1.0","id":"83e49e4e-56e9-4623-a044-f228ebbd6daa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                       │\n│    If the above advice does not help, please let us know:                                                             │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                           │\n│                                                                                                                       │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │\n│    Please also attach the following file to the GitHub issue:                                                         │\n│    - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │\n│                                                                                                                       │\n╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 stop -p json-output-304000 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (184.07s)
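
The stop fails with GUEST_STOP_TIMEOUT after the driver's 60 stop retries are exhausted. A manual cleanup sketch for a wedged profile, assuming the MINIKUBE_HOME layout used throughout this run (the qemu.pid path comes from the -pidfile argument visible in the VM launch commands later in this report):

    $ PROFILE=json-output-304000
    $ MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
    $ kill "$(cat "$MINIKUBE_HOME/machines/$PROFILE/qemu.pid")" 2>/dev/null || true
    $ out/minikube-darwin-arm64 delete -p "$PROFILE"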
TestMountStart/serial/StartWithMountFirst (10.17s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-472000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-472000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.093623459s)
-- stdout --
	* [mount-start-1-472000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-472000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-472000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-472000 -n mount-start-1-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-472000 -n mount-start-1-472000: exit status 7 (74.676333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.17s)
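
From this test onward the failure mode changes: socket_vmnet_client cannot reach "/var/run/socket_vmnet" at all (Connection refused), i.e. the socket_vmnet daemon is not listening on the agent. Two plausible checks, hedged on how socket_vmnet was installed (the /opt/socket_vmnet paths in these logs suggest a source install; the launchd label below is the one socket_vmnet's own install target uses and is an assumption here):

    $ ls -l /var/run/socket_vmnet                                        # does the socket exist?
    $ sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet  # source/launchd install (assumed label)
    $ sudo brew services restart socket_vmnet                            # Homebrew install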
TestMultiNode/serial/FreshStart2Nodes (10.05s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-148000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-148000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.974197s)
-- stdout --
	* [multinode-148000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-148000" primary control-plane node in "multinode-148000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-148000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1216 12:25:31.669265    5216 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:25:31.669435    5216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:25:31.669438    5216 out.go:358] Setting ErrFile to fd 2...
	I1216 12:25:31.669440    5216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:25:31.669573    5216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:25:31.670681    5216 out.go:352] Setting JSON to false
	I1216 12:25:31.688564    5216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3302,"bootTime":1734377429,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:25:31.688665    5216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:25:31.694489    5216 out.go:177] * [multinode-148000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:25:31.703391    5216 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:25:31.703427    5216 notify.go:220] Checking for updates...
	I1216 12:25:31.712310    5216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:25:31.715320    5216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:25:31.718288    5216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:25:31.721298    5216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:25:31.724305    5216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:25:31.725957    5216 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:25:31.730295    5216 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:25:31.741224    5216 start.go:297] selected driver: qemu2
	I1216 12:25:31.741232    5216 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:25:31.741242    5216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:25:31.743844    5216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:25:31.748282    5216 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:25:31.751413    5216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:25:31.751435    5216 cni.go:84] Creating CNI manager for ""
	I1216 12:25:31.751457    5216 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1216 12:25:31.751461    5216 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 12:25:31.751497    5216 start.go:340] cluster config:
	{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:25:31.756456    5216 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:25:31.764258    5216 out.go:177] * Starting "multinode-148000" primary control-plane node in "multinode-148000" cluster
	I1216 12:25:31.768098    5216 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:25:31.768117    5216 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:25:31.768133    5216 cache.go:56] Caching tarball of preloaded images
	I1216 12:25:31.768228    5216 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:25:31.768234    5216 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:25:31.768470    5216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/multinode-148000/config.json ...
	I1216 12:25:31.768481    5216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/multinode-148000/config.json: {Name:mk871f6bb5152b844a1fcb8e554315b550f1d7f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:25:31.768970    5216 start.go:360] acquireMachinesLock for multinode-148000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:25:31.769022    5216 start.go:364] duration metric: took 45.917µs to acquireMachinesLock for "multinode-148000"
	I1216 12:25:31.769036    5216 start.go:93] Provisioning new machine with config: &{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:25:31.769076    5216 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:25:31.773400    5216 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:25:31.792015    5216 start.go:159] libmachine.API.Create for "multinode-148000" (driver="qemu2")
	I1216 12:25:31.792048    5216 client.go:168] LocalClient.Create starting
	I1216 12:25:31.792125    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:25:31.792163    5216 main.go:141] libmachine: Decoding PEM data...
	I1216 12:25:31.792177    5216 main.go:141] libmachine: Parsing certificate...
	I1216 12:25:31.792219    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:25:31.792249    5216 main.go:141] libmachine: Decoding PEM data...
	I1216 12:25:31.792261    5216 main.go:141] libmachine: Parsing certificate...
	I1216 12:25:31.792656    5216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:25:31.952251    5216 main.go:141] libmachine: Creating SSH key...
	I1216 12:25:32.110718    5216 main.go:141] libmachine: Creating Disk image...
	I1216 12:25:32.110727    5216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:25:32.110953    5216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:25:32.121042    5216 main.go:141] libmachine: STDOUT: 
	I1216 12:25:32.121070    5216 main.go:141] libmachine: STDERR: 
	I1216 12:25:32.121127    5216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2 +20000M
	I1216 12:25:32.129691    5216 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:25:32.129709    5216 main.go:141] libmachine: STDERR: 
	I1216 12:25:32.129730    5216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:25:32.129737    5216 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:25:32.129749    5216 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:25:32.129780    5216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:45:67:a3:34:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:25:32.131602    5216 main.go:141] libmachine: STDOUT: 
	I1216 12:25:32.131618    5216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:25:32.131637    5216 client.go:171] duration metric: took 339.580417ms to LocalClient.Create
	I1216 12:25:34.133831    5216 start.go:128] duration metric: took 2.364714s to createHost
	I1216 12:25:34.133890    5216 start.go:83] releasing machines lock for "multinode-148000", held for 2.364838333s
	W1216 12:25:34.133927    5216 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:25:34.141317    5216 out.go:177] * Deleting "multinode-148000" in qemu2 ...
	W1216 12:25:34.176822    5216 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:25:34.176842    5216 start.go:729] Will try again in 5 seconds ...
	I1216 12:25:39.179127    5216 start.go:360] acquireMachinesLock for multinode-148000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:25:39.179665    5216 start.go:364] duration metric: took 417.792µs to acquireMachinesLock for "multinode-148000"
	I1216 12:25:39.179792    5216 start.go:93] Provisioning new machine with config: &{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:25:39.180098    5216 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:25:39.197682    5216 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:25:39.247476    5216 start.go:159] libmachine.API.Create for "multinode-148000" (driver="qemu2")
	I1216 12:25:39.247534    5216 client.go:168] LocalClient.Create starting
	I1216 12:25:39.247674    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:25:39.247763    5216 main.go:141] libmachine: Decoding PEM data...
	I1216 12:25:39.247779    5216 main.go:141] libmachine: Parsing certificate...
	I1216 12:25:39.247842    5216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:25:39.247907    5216 main.go:141] libmachine: Decoding PEM data...
	I1216 12:25:39.247924    5216 main.go:141] libmachine: Parsing certificate...
	I1216 12:25:39.248595    5216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:25:39.420003    5216 main.go:141] libmachine: Creating SSH key...
	I1216 12:25:39.540190    5216 main.go:141] libmachine: Creating Disk image...
	I1216 12:25:39.540199    5216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:25:39.540421    5216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:25:39.550183    5216 main.go:141] libmachine: STDOUT: 
	I1216 12:25:39.550205    5216 main.go:141] libmachine: STDERR: 
	I1216 12:25:39.550262    5216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2 +20000M
	I1216 12:25:39.558614    5216 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:25:39.558663    5216 main.go:141] libmachine: STDERR: 
	I1216 12:25:39.558677    5216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:25:39.558682    5216 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:25:39.558691    5216 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:25:39.558730    5216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4f:83:80:ac:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:25:39.560529    5216 main.go:141] libmachine: STDOUT: 
	I1216 12:25:39.560564    5216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:25:39.560578    5216 client.go:171] duration metric: took 313.036ms to LocalClient.Create
	I1216 12:25:41.562736    5216 start.go:128] duration metric: took 2.382597041s to createHost
	I1216 12:25:41.562772    5216 start.go:83] releasing machines lock for "multinode-148000", held for 2.383049167s
	W1216 12:25:41.563064    5216 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-148000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-148000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:25:41.575538    5216 out.go:201] 
	W1216 12:25:41.579695    5216 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:25:41.579714    5216 out.go:270] * 
	* 
	W1216 12:25:41.581906    5216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:25:41.596675    5216 out.go:201] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-148000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (72.499083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.05s)
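
The verbose log above shows exactly where provisioning dies: the qemu2 driver launches QEMU through socket_vmnet_client, and that wrapper exits with "Connection refused" before QEMU ever starts. Condensed from the executing line in the log (most arguments elided):

    $ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
        qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 ...

Every remaining TestMultiNode subtest below fails as a cascade of this single provisioning failure.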
TestMultiNode/serial/DeployApp2Nodes (110.68s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.508375ms)
** stderr ** 
	error: cluster "multinode-148000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- rollout status deployment/busybox: exit status 1 (63.284584ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (63.308833ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:25:41.942913    1494 retry.go:31] will retry after 1.472663831s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.794959ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:25:43.526714    1494 retry.go:31] will retry after 1.08096892s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.649667ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:25:44.717716    1494 retry.go:31] will retry after 2.318889487s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.209958ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:25:47.146244    1494 retry.go:31] will retry after 5.056136123s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.691208ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:25:52.311535    1494 retry.go:31] will retry after 3.878024046s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.497916ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:25:56.300541    1494 retry.go:31] will retry after 4.36845625s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.253166ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:26:00.781777    1494 retry.go:31] will retry after 8.025782539s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.333416ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:26:08.919369    1494 retry.go:31] will retry after 21.355955053s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.550291ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:26:30.385813    1494 retry.go:31] will retry after 37.3267649s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.913791ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1216 12:27:07.820243    1494 retry.go:31] will retry after 24.145629914s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.605584ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.472042ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.246292ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.499625ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.870792ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (34.736416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (110.68s)
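
All of the kubectl invocations above fail at the kubeconfig level ("no server found for cluster") because the cluster from FreshStart2Nodes was never provisioned; the retry loop simply backs off for roughly 110 seconds before giving up. One quick local check that the context is genuinely absent:

    $ kubectl config get-contexts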
TestMultiNode/serial/PingHostFrom2Pods (0.1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-148000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.367208ms)
** stderr ** 
	error: no server found for cluster "multinode-148000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (34.599125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)
TestMultiNode/serial/AddNode (0.08s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-148000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-148000 -v 3 --alsologtostderr: exit status 83 (48.395458ms)
-- stdout --
	* The control-plane node multinode-148000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-148000"
-- /stdout --
** stderr ** 
	I1216 12:27:32.490840    5329 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:32.491219    5329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.491223    5329 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:32.491226    5329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.491395    5329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:32.491635    5329 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:32.491836    5329 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:32.497484    5329 out.go:177] * The control-plane node multinode-148000 host is not running: state=Stopped
	I1216 12:27:32.501335    5329 out.go:177]   To start a cluster, run: "minikube start -p multinode-148000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-148000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (34.157084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
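
For reference, the post-mortem's --format={{.Host}} flag is a Go text/template rendered against the status structure dumped in the logs above. A minimal sketch of that rendering, using a reduced, hypothetical struct (minikube's real status type carries more fields, as the &{Name:... Host:...} dump shows):

	package main

	import (
		"os"
		"text/template"
	)

	// Status carries only the fields this sketch needs.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Name: "multinode-148000", Host: "Stopped"}) // prints: Stopped
	}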

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-148000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-148000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (34.444875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-148000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-148000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-148000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (34.974791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)
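
The follow-up "unexpected end of JSON input" is what encoding/json reports when asked to decode an empty document, which is all kubectl produced after the context lookup failed. A minimal reproduction, assuming a decode step shaped like the test's:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// kubectl wrote nothing to stdout, so the decoder sees zero bytes.
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}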

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-148000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-148000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-148000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.32.0\",\"ClusterName\":\"multinode-148000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.32.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (33.558916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
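
The assertion walks .valid[].Config.Nodes in the JSON above and expects three entries, but the stopped profile still records only its single control-plane node. A minimal sketch of that count, using a reduced struct with field names taken from the dumped JSON (minikube's real config types are much larger):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct{ Name string }
			}
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println(err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test wants 3 here
		}
	}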

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status --output json --alsologtostderr: exit status 7 (33.244708ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-148000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:32.727682    5343 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:32.727855    5343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.727859    5343 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:32.727861    5343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.727984    5343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:32.728103    5343 out.go:352] Setting JSON to true
	I1216 12:27:32.728113    5343 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:32.728179    5343 notify.go:220] Checking for updates...
	I1216 12:27:32.728333    5343 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:32.728344    5343 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:32.728576    5343 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:32.728580    5343 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:32.728582    5343 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-148000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (33.79625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
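
The decode error is a shape mismatch: with only one node reporting, minikube printed a single JSON object, while the test unmarshals into a slice ([]cluster.Status). A minimal reproduction with a hypothetical reduced struct:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		// The shape printed in the stdout block above: one object, not an array.
		out := []byte(`{"Name":"multinode-148000","Host":"Stopped"}`)
		var statuses []Status
		fmt.Println(json.Unmarshal(out, &statuses))
		// json: cannot unmarshal object into Go value of type []main.Status
	}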

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 node stop m03: exit status 85 (53.248166ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-148000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status: exit status 7 (33.7605ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr: exit status 7 (34.8195ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:32.884213    5351 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:32.884412    5351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.884415    5351 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:32.884417    5351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.884552    5351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:32.884670    5351 out.go:352] Setting JSON to false
	I1216 12:27:32.884684    5351 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:32.884726    5351 notify.go:220] Checking for updates...
	I1216 12:27:32.884893    5351 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:32.884900    5351 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:32.885143    5351 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:32.885146    5351 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:32.885150    5351 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr": multinode-148000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (34.786084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (49.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.116084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:32.953524    5355 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:32.953812    5355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.953815    5355 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:32.953817    5355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:32.953955    5355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:32.954198    5355 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:32.954391    5355 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:32.958386    5355 out.go:201] 
	W1216 12:27:32.961445    5355 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1216 12:27:32.961454    5355 out.go:270] * 
	* 
	W1216 12:27:32.962891    5355 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:27:32.967346    5355 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1216 12:27:32.953524    5355 out.go:345] Setting OutFile to fd 1 ...
I1216 12:27:32.953812    5355 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 12:27:32.953815    5355 out.go:358] Setting ErrFile to fd 2...
I1216 12:27:32.953817    5355 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 12:27:32.953955    5355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 12:27:32.954198    5355 mustload.go:65] Loading cluster: multinode-148000
I1216 12:27:32.954391    5355 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 12:27:32.958386    5355 out.go:201] 
W1216 12:27:32.961445    5355 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1216 12:27:32.961454    5355 out.go:270] * 
* 
W1216 12:27:32.962891    5355 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 12:27:32.967346    5355 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-148000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (35.0555ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:33.005724    5357 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:33.005922    5357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:33.005925    5357 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:33.005928    5357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:33.006054    5357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:33.006172    5357 out.go:352] Setting JSON to false
	I1216 12:27:33.006184    5357 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:33.006233    5357 notify.go:220] Checking for updates...
	I1216 12:27:33.006392    5357 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:33.006400    5357 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:33.006665    5357 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:33.006669    5357 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:33.006671    5357 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:33.007539    1494 retry.go:31] will retry after 1.338549161s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (77.789625ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:34.424131    5359 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:34.424343    5359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:34.424348    5359 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:34.424351    5359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:34.424495    5359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:34.424637    5359 out.go:352] Setting JSON to false
	I1216 12:27:34.424649    5359 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:34.424678    5359 notify.go:220] Checking for updates...
	I1216 12:27:34.424899    5359 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:34.424907    5359 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:34.425208    5359 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:34.425212    5359 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:34.425215    5359 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:34.426274    1494 retry.go:31] will retry after 1.01857186s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (79.065ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:35.524106    5361 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:35.524313    5361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:35.524317    5361 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:35.524321    5361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:35.524492    5361 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:35.524629    5361 out.go:352] Setting JSON to false
	I1216 12:27:35.524641    5361 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:35.524684    5361 notify.go:220] Checking for updates...
	I1216 12:27:35.524907    5361 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:35.524915    5361 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:35.525224    5361 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:35.525229    5361 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:35.525231    5361 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:35.526317    1494 retry.go:31] will retry after 1.460868538s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (80.330959ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:37.067705    5365 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:37.067970    5365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:37.067974    5365 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:37.067978    5365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:37.068142    5365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:37.068296    5365 out.go:352] Setting JSON to false
	I1216 12:27:37.068309    5365 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:37.068349    5365 notify.go:220] Checking for updates...
	I1216 12:27:37.068597    5365 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:37.068605    5365 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:37.068928    5365 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:37.068933    5365 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:37.068935    5365 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:37.070049    1494 retry.go:31] will retry after 3.311749253s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (78.663625ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:40.460733    5367 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:40.460958    5367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:40.460962    5367 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:40.460965    5367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:40.461127    5367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:40.461293    5367 out.go:352] Setting JSON to false
	I1216 12:27:40.461306    5367 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:40.461344    5367 notify.go:220] Checking for updates...
	I1216 12:27:40.461561    5367 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:40.461568    5367 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:40.461870    5367 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:40.461874    5367 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:40.461876    5367 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:40.462907    1494 retry.go:31] will retry after 5.479723172s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (77.198125ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:46.020109    5371 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:46.020356    5371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:46.020361    5371 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:46.020364    5371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:46.020535    5371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:46.020695    5371 out.go:352] Setting JSON to false
	I1216 12:27:46.020707    5371 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:46.020742    5371 notify.go:220] Checking for updates...
	I1216 12:27:46.020977    5371 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:46.020985    5371 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:46.021298    5371 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:46.021303    5371 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:46.021306    5371 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:46.022383    1494 retry.go:31] will retry after 6.470831202s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (78.839834ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:27:52.572507    5375 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:27:52.572718    5375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:52.572721    5375 out.go:358] Setting ErrFile to fd 2...
	I1216 12:27:52.572724    5375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:27:52.572873    5375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:27:52.573013    5375 out.go:352] Setting JSON to false
	I1216 12:27:52.573024    5375 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:27:52.573064    5375 notify.go:220] Checking for updates...
	I1216 12:27:52.573276    5375 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:27:52.573284    5375 status.go:174] checking status of multinode-148000 ...
	I1216 12:27:52.573581    5375 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:27:52.573586    5375 status.go:384] host is not running, skipping remaining checks
	I1216 12:27:52.573588    5375 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:27:52.574648    1494 retry.go:31] will retry after 11.260378043s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (78.663125ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:28:03.914055    5377 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:28:03.914322    5377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:03.914326    5377 out.go:358] Setting ErrFile to fd 2...
	I1216 12:28:03.914330    5377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:03.914482    5377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:28:03.914652    5377 out.go:352] Setting JSON to false
	I1216 12:28:03.914666    5377 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:28:03.914708    5377 notify.go:220] Checking for updates...
	I1216 12:28:03.914914    5377 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:28:03.914922    5377 status.go:174] checking status of multinode-148000 ...
	I1216 12:28:03.915238    5377 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:28:03.915242    5377 status.go:384] host is not running, skipping remaining checks
	I1216 12:28:03.915245    5377 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1216 12:28:03.916311    1494 retry.go:31] will retry after 18.577140405s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr: exit status 7 (77.167208ms)

                                                
                                                
-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:28:22.571063    5383 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:28:22.571286    5383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:22.571290    5383 out.go:358] Setting ErrFile to fd 2...
	I1216 12:28:22.571293    5383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:22.571469    5383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:28:22.571641    5383 out.go:352] Setting JSON to false
	I1216 12:28:22.571653    5383 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:28:22.571692    5383 notify.go:220] Checking for updates...
	I1216 12:28:22.571923    5383 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:28:22.571931    5383 status.go:174] checking status of multinode-148000 ...
	I1216 12:28:22.572252    5383 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:28:22.572257    5383 status.go:384] host is not running, skipping remaining checks
	I1216 12:28:22.572260    5383 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-148000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (36.057ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.69s)
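
The repeated "will retry after ..." lines come from the harness polling minikube status with growing delays until it gives up. A minimal sketch of that pattern (the real helper, retry.go in the minikube tree, uses its own backoff schedule):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := time.Second
		deadline := time.Now().Add(45 * time.Second) // roughly the budget seen above
		for time.Now().Before(deadline) {
			err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-148000", "status").Run()
			if err == nil {
				fmt.Println("cluster is up")
				return
			}
			fmt.Printf("will retry after %v: %v\n", delay, err) // e.g. exit status 7
			time.Sleep(delay)
			delay += delay / 2 // stretch the wait between polls
		}
		fmt.Println("gave up waiting for a healthy status")
	}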

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-148000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-148000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-148000: (3.450584333s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-148000 --wait=true -v=8 --alsologtostderr
E1216 12:28:28.360237    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-148000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219413125s)

                                                
                                                
-- stdout --
	* [multinode-148000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-148000" primary control-plane node in "multinode-148000" cluster
	* Restarting existing qemu2 VM for "multinode-148000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-148000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:28:26.160858    5407 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:28:26.161034    5407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:26.161038    5407 out.go:358] Setting ErrFile to fd 2...
	I1216 12:28:26.161040    5407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:26.161209    5407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:28:26.162466    5407 out.go:352] Setting JSON to false
	I1216 12:28:26.181947    5407 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3477,"bootTime":1734377429,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:28:26.182021    5407 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:28:26.186350    5407 out.go:177] * [multinode-148000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:28:26.193542    5407 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:28:26.193553    5407 notify.go:220] Checking for updates...
	I1216 12:28:26.200476    5407 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:28:26.203448    5407 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:28:26.206351    5407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:28:26.209437    5407 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:28:26.212443    5407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:28:26.214077    5407 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:28:26.214120    5407 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:28:26.217439    5407 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:28:26.224264    5407 start.go:297] selected driver: qemu2
	I1216 12:28:26.224270    5407 start.go:901] validating driver "qemu2" against &{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:28:26.224312    5407 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:28:26.226782    5407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:28:26.226811    5407 cni.go:84] Creating CNI manager for ""
	I1216 12:28:26.226832    5407 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 12:28:26.226884    5407 start.go:340] cluster config:
	{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:28:26.231293    5407 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:28:26.239446    5407 out.go:177] * Starting "multinode-148000" primary control-plane node in "multinode-148000" cluster
	I1216 12:28:26.243405    5407 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:28:26.243419    5407 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:28:26.243427    5407 cache.go:56] Caching tarball of preloaded images
	I1216 12:28:26.243494    5407 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:28:26.243507    5407 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:28:26.243559    5407 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/multinode-148000/config.json ...
	I1216 12:28:26.244033    5407 start.go:360] acquireMachinesLock for multinode-148000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:28:26.244082    5407 start.go:364] duration metric: took 42.875µs to acquireMachinesLock for "multinode-148000"
	I1216 12:28:26.244091    5407 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:28:26.244096    5407 fix.go:54] fixHost starting: 
	I1216 12:28:26.244214    5407 fix.go:112] recreateIfNeeded on multinode-148000: state=Stopped err=<nil>
	W1216 12:28:26.244225    5407 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:28:26.247459    5407 out.go:177] * Restarting existing qemu2 VM for "multinode-148000" ...
	I1216 12:28:26.251407    5407 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:28:26.251451    5407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4f:83:80:ac:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:28:26.253668    5407 main.go:141] libmachine: STDOUT: 
	I1216 12:28:26.253688    5407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:28:26.253718    5407 fix.go:56] duration metric: took 9.621458ms for fixHost
	I1216 12:28:26.253723    5407 start.go:83] releasing machines lock for "multinode-148000", held for 9.636542ms
	W1216 12:28:26.253727    5407 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:28:26.253775    5407 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:28:26.253779    5407 start.go:729] Will try again in 5 seconds ...
	I1216 12:28:31.256025    5407 start.go:360] acquireMachinesLock for multinode-148000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:28:31.256457    5407 start.go:364] duration metric: took 327.5µs to acquireMachinesLock for "multinode-148000"
	I1216 12:28:31.256598    5407 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:28:31.256619    5407 fix.go:54] fixHost starting: 
	I1216 12:28:31.257393    5407 fix.go:112] recreateIfNeeded on multinode-148000: state=Stopped err=<nil>
	W1216 12:28:31.257419    5407 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:28:31.261875    5407 out.go:177] * Restarting existing qemu2 VM for "multinode-148000" ...
	I1216 12:28:31.268754    5407 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:28:31.269006    5407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4f:83:80:ac:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:28:31.278369    5407 main.go:141] libmachine: STDOUT: 
	I1216 12:28:31.278437    5407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:28:31.278536    5407 fix.go:56] duration metric: took 21.917584ms for fixHost
	I1216 12:28:31.278551    5407 start.go:83] releasing machines lock for "multinode-148000", held for 22.046959ms
	W1216 12:28:31.278723    5407 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-148000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-148000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:28:31.285870    5407 out.go:201] 
	W1216 12:28:31.289908    5407 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:28:31.289946    5407 out.go:270] * 
	* 
	W1216 12:28:31.292281    5407 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:28:31.301767    5407 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-148000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-148000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (36.698ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.81s)
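
Every failed restart above stops at the same host-side step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor (the -netdev socket,id=net0,fd=3 argument in the logged command line) and the VM is never launched. As a minimal sketch (only the socket path is taken from the log; the program itself is illustrative and not part of minikube), the failing step can be reproduced by dialing the unix socket directly:

	// probe_socket_vmnet.go - a minimal, standalone probe of the socket
	// path recorded in the log above. Not part of minikube.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here is the same condition the qemu2
			// driver reports: nothing is accepting on the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A permission error instead of "connection refused" would point at socket ownership rather than a missing daemon; the consistent refusal across this group suggests the socket_vmnet service was simply not running on the build agent.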

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 node delete m03: exit status 83 (44.62125ms)

-- stdout --
	* The control-plane node multinode-148000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-148000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-148000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr: exit status 7 (34.556208ms)

-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1216 12:28:31.500417    5423 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:28:31.500607    5423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:31.500610    5423 out.go:358] Setting ErrFile to fd 2...
	I1216 12:28:31.500612    5423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:31.500732    5423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:28:31.500864    5423 out.go:352] Setting JSON to false
	I1216 12:28:31.500874    5423 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:28:31.500930    5423 notify.go:220] Checking for updates...
	I1216 12:28:31.501086    5423 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:28:31.501093    5423 status.go:174] checking status of multinode-148000 ...
	I1216 12:28:31.501338    5423 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:28:31.501341    5423 status.go:384] host is not running, skipping remaining checks
	I1216 12:28:31.501343    5423 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (34.403125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (3.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-148000 stop: (3.610489s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status: exit status 7 (71.827083ms)

-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr: exit status 7 (36.737167ms)

-- stdout --
	multinode-148000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1216 12:28:35.254587    5450 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:28:35.254765    5450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:35.254768    5450 out.go:358] Setting ErrFile to fd 2...
	I1216 12:28:35.254770    5450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:35.254883    5450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:28:35.255010    5450 out.go:352] Setting JSON to false
	I1216 12:28:35.255022    5450 mustload.go:65] Loading cluster: multinode-148000
	I1216 12:28:35.255074    5450 notify.go:220] Checking for updates...
	I1216 12:28:35.255249    5450 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:28:35.255259    5450 status.go:174] checking status of multinode-148000 ...
	I1216 12:28:35.255537    5450 status.go:371] multinode-148000 host status = "Stopped" (err=<nil>)
	I1216 12:28:35.255541    5450 status.go:384] host is not running, skipping remaining checks
	I1216 12:28:35.255543    5450 status.go:176] multinode-148000 status: &{Name:multinode-148000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr": multinode-148000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-148000 status --alsologtostderr": multinode-148000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (35.403042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.76s)
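
The post-mortem helpers above poll `minikube status --format={{.Host}}`; `--format` takes a Go text/template that is rendered against the status record the log dumps at status.go:176 (Name, Host, Kubelet, APIServer, Kubeconfig, ...). A stand-in sketch (the struct below copies the logged field names and is not minikube's actual type) shows why the helper prints just "Stopped":

	// status_format.go - renders {{.Host}} against a stand-in status struct.
	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	type status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := status{Name: "multinode-148000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println()
	}

Note the exit codes in this group, all taken from the log itself: `status` returns 7 for a stopped host (which helpers_test explicitly treats as "may be ok"), the stopped-host advisory paths return 83, and the failed starts exit 80 with GUEST_PROVISION.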

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-148000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-148000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.190582375s)

-- stdout --
	* [multinode-148000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-148000" primary control-plane node in "multinode-148000" cluster
	* Restarting existing qemu2 VM for "multinode-148000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-148000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:28:35.324476    5454 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:28:35.324631    5454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:35.324634    5454 out.go:358] Setting ErrFile to fd 2...
	I1216 12:28:35.324637    5454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:28:35.324789    5454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:28:35.325889    5454 out.go:352] Setting JSON to false
	I1216 12:28:35.343391    5454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3486,"bootTime":1734377429,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:28:35.343472    5454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:28:35.348565    5454 out.go:177] * [multinode-148000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:28:35.356519    5454 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:28:35.356591    5454 notify.go:220] Checking for updates...
	I1216 12:28:35.364512    5454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:28:35.367519    5454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:28:35.370558    5454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:28:35.373540    5454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:28:35.376450    5454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:28:35.379817    5454 config.go:182] Loaded profile config "multinode-148000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:28:35.380084    5454 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:28:35.384563    5454 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:28:35.391484    5454 start.go:297] selected driver: qemu2
	I1216 12:28:35.391493    5454 start.go:901] validating driver "qemu2" against &{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:28:35.391570    5454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:28:35.394163    5454 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:28:35.394185    5454 cni.go:84] Creating CNI manager for ""
	I1216 12:28:35.394207    5454 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1216 12:28:35.394255    5454 start.go:340] cluster config:
	{Name:multinode-148000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:multinode-148000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:28:35.398657    5454 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:28:35.405544    5454 out.go:177] * Starting "multinode-148000" primary control-plane node in "multinode-148000" cluster
	I1216 12:28:35.409486    5454 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:28:35.409501    5454 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:28:35.409517    5454 cache.go:56] Caching tarball of preloaded images
	I1216 12:28:35.409576    5454 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:28:35.409582    5454 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:28:35.409640    5454 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/multinode-148000/config.json ...
	I1216 12:28:35.410113    5454 start.go:360] acquireMachinesLock for multinode-148000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:28:35.410143    5454 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "multinode-148000"
	I1216 12:28:35.410151    5454 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:28:35.410157    5454 fix.go:54] fixHost starting: 
	I1216 12:28:35.410270    5454 fix.go:112] recreateIfNeeded on multinode-148000: state=Stopped err=<nil>
	W1216 12:28:35.410279    5454 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:28:35.414508    5454 out.go:177] * Restarting existing qemu2 VM for "multinode-148000" ...
	I1216 12:28:35.421443    5454 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:28:35.421483    5454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4f:83:80:ac:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:28:35.423616    5454 main.go:141] libmachine: STDOUT: 
	I1216 12:28:35.423636    5454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:28:35.423666    5454 fix.go:56] duration metric: took 13.508708ms for fixHost
	I1216 12:28:35.423671    5454 start.go:83] releasing machines lock for "multinode-148000", held for 13.523834ms
	W1216 12:28:35.423676    5454 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:28:35.423720    5454 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:28:35.423725    5454 start.go:729] Will try again in 5 seconds ...
	I1216 12:28:40.425924    5454 start.go:360] acquireMachinesLock for multinode-148000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:28:40.426427    5454 start.go:364] duration metric: took 389.875µs to acquireMachinesLock for "multinode-148000"
	I1216 12:28:40.426572    5454 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:28:40.426595    5454 fix.go:54] fixHost starting: 
	I1216 12:28:40.427275    5454 fix.go:112] recreateIfNeeded on multinode-148000: state=Stopped err=<nil>
	W1216 12:28:40.427307    5454 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:28:40.431742    5454 out.go:177] * Restarting existing qemu2 VM for "multinode-148000" ...
	I1216 12:28:40.437038    5454 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:28:40.437281    5454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4f:83:80:ac:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/multinode-148000/disk.qcow2
	I1216 12:28:40.447033    5454 main.go:141] libmachine: STDOUT: 
	I1216 12:28:40.447112    5454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:28:40.447196    5454 fix.go:56] duration metric: took 20.603791ms for fixHost
	I1216 12:28:40.447218    5454 start.go:83] releasing machines lock for "multinode-148000", held for 20.731458ms
	W1216 12:28:40.447435    5454 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-148000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-148000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:28:40.454739    5454 out.go:201] 
	W1216 12:28:40.458783    5454 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:28:40.458818    5454 out.go:270] * 
	* 
	W1216 12:28:40.461308    5454 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:28:40.469633    5454 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-148000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (73.448125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
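
The restart path in this log makes exactly two fixed-interval attempts: fixHost fails, start.go logs "Will try again in 5 seconds ...", the retry fails the same way, and the run exits 80 with GUEST_PROVISION. A compressed sketch of that control flow (startHost below is an illustrative stand-in; the real logic lives in minikube's start.go):

	// retry_sketch.go - two attempts with a fixed 5s delay, mirroring the
	// "StartHost failed, but will try again" sequence in the log.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// stand-in for the qemu2 driver start; always refused in this run
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the "exit status 80" the test reports above
			}
		}
	}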

TestMultiNode/serial/ValidateNameConflict (20.9s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-148000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-148000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-148000-m01 --driver=qemu2 : exit status 80 (10.450656917s)

-- stdout --
	* [multinode-148000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-148000-m01" primary control-plane node in "multinode-148000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-148000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-148000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-148000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-148000-m02 --driver=qemu2 : exit status 80 (10.208057208s)

-- stdout --
	* [multinode-148000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-148000-m02" primary control-plane node in "multinode-148000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-148000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-148000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-148000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-148000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-148000: exit status 83 (81.518792ms)

-- stdout --
	* The control-plane node multinode-148000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-148000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-148000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-148000 -n multinode-148000: exit status 7 (35.256417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-148000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.90s)
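
ValidateNameConflict is the first failure in this group to exercise the fresh-create path ("Creating qemu2 VM ..." rather than "Restarting existing qemu2 VM ..."), and it dies with the identical "Connection refused"; deleting and recreating the profile changes nothing. That localizes the fault to the host's socket_vmnet daemon rather than to stale VM state. The launch shape recorded in the logs is socket_vmnet_client wrapping qemu-system-aarch64; a stripped-down reproduction (paths from the log, with /usr/bin/true standing in for the full QEMU invocation) is:

	// launch_sketch.go - runs a command under socket_vmnet_client the way
	// the qemu2 driver does; fails fast if the daemon socket is down.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "/usr/bin/true")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// With the daemon down, this reproduces the driver failure.
			fmt.Fprintln(os.Stderr, "launch failed:", err)
			os.Exit(1)
		}
	}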

TestPreload (10s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-956000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-956000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.834932708s)

-- stdout --
	* [test-preload-956000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-956000" primary control-plane node in "test-preload-956000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-956000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:29:01.611775    5508 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:29:01.611927    5508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:01.611931    5508 out.go:358] Setting ErrFile to fd 2...
	I1216 12:29:01.611933    5508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:29:01.612076    5508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:29:01.613248    5508 out.go:352] Setting JSON to false
	I1216 12:29:01.630989    5508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3512,"bootTime":1734377429,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:29:01.631064    5508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:29:01.636986    5508 out.go:177] * [test-preload-956000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:29:01.643902    5508 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:29:01.643964    5508 notify.go:220] Checking for updates...
	I1216 12:29:01.652951    5508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:29:01.656036    5508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:29:01.659965    5508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:29:01.662996    5508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:29:01.666101    5508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:29:01.669341    5508 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:29:01.669386    5508 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:29:01.673012    5508 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:29:01.679955    5508 start.go:297] selected driver: qemu2
	I1216 12:29:01.679963    5508 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:29:01.679971    5508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:29:01.682615    5508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:29:01.686979    5508 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:29:01.689994    5508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:29:01.690013    5508 cni.go:84] Creating CNI manager for ""
	I1216 12:29:01.690049    5508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:29:01.690053    5508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:29:01.690077    5508 start.go:340] cluster config:
	{Name:test-preload-956000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:29:01.694759    5508 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.701978    5508 out.go:177] * Starting "test-preload-956000" primary control-plane node in "test-preload-956000" cluster
	I1216 12:29:01.705957    5508 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1216 12:29:01.706045    5508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/test-preload-956000/config.json ...
	I1216 12:29:01.706071    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/test-preload-956000/config.json: {Name:mk6009700f05bd2ef2c7ea3573be1ec1bcda15d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:29:01.706078    5508 cache.go:107] acquiring lock: {Name:mk0316f1a272225e081b0f07bb27995f5380f97e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706087    5508 cache.go:107] acquiring lock: {Name:mkde417adcf32f4dddf4d4cbb2289c4a3d9e49f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706167    5508 cache.go:107] acquiring lock: {Name:mka5c76cce1397da319aff0115f11c4021750b7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706272    5508 cache.go:107] acquiring lock: {Name:mk966166485a98167b85c78a5ad8fdccb35b46e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706277    5508 cache.go:107] acquiring lock: {Name:mkb77643d072fe9fb899fc3c5468ab294fc8fa54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706322    5508 cache.go:107] acquiring lock: {Name:mk712a09b8ceb04b92fdf87a5d29ba75f3eb3a47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706329    5508 cache.go:107] acquiring lock: {Name:mk7a082aedaf3680798a446b653a085ba79661e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706365    5508 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 12:29:01.706319    5508 cache.go:107] acquiring lock: {Name:mka7b705796e8f2e1dd2d049156f630c94dd10e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:29:01.706367    5508 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 12:29:01.706729    5508 start.go:360] acquireMachinesLock for test-preload-956000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:01.706818    5508 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 12:29:01.706946    5508 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 12:29:01.706988    5508 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:29:01.706994    5508 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 12:29:01.707018    5508 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:29:01.707063    5508 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:29:01.707071    5508 start.go:364] duration metric: took 327.291µs to acquireMachinesLock for "test-preload-956000"
	I1216 12:29:01.707089    5508 start.go:93] Provisioning new machine with config: &{Name:test-preload-956000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:01.707121    5508 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:01.714960    5508 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:29:01.718639    5508 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 12:29:01.719478    5508 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 12:29:01.719695    5508 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 12:29:01.719852    5508 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:29:01.719886    5508 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 12:29:01.719895    5508 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 12:29:01.719957    5508 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:29:01.720287    5508 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:29:01.733972    5508 start.go:159] libmachine.API.Create for "test-preload-956000" (driver="qemu2")
	I1216 12:29:01.733999    5508 client.go:168] LocalClient.Create starting
	I1216 12:29:01.734085    5508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:01.734123    5508 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:01.734136    5508 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:01.734174    5508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:01.734208    5508 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:01.734223    5508 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:01.734611    5508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:01.894469    5508 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:01.994588    5508 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:01.994604    5508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:01.994840    5508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2
	I1216 12:29:02.004225    5508 main.go:141] libmachine: STDOUT: 
	I1216 12:29:02.004245    5508 main.go:141] libmachine: STDERR: 
	I1216 12:29:02.004293    5508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2 +20000M
	I1216 12:29:02.013491    5508 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:02.013513    5508 main.go:141] libmachine: STDERR: 
	I1216 12:29:02.013525    5508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2
	I1216 12:29:02.013529    5508 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:02.013544    5508 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:02.013577    5508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:10:77:0f:4b:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2
	I1216 12:29:02.016304    5508 main.go:141] libmachine: STDOUT: 
	I1216 12:29:02.016324    5508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:02.016346    5508 client.go:171] duration metric: took 282.339792ms to LocalClient.Create
	I1216 12:29:02.373799    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1216 12:29:02.385734    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 12:29:02.399059    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1216 12:29:02.527581    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 12:29:02.556814    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1216 12:29:02.556826    5508 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 850.603125ms
	I1216 12:29:02.556835    5508 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1216 12:29:02.569934    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1216 12:29:02.572968    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W1216 12:29:02.615851    5508 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 12:29:02.615876    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W1216 12:29:03.270789    5508 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 12:29:03.270892    5508 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 12:29:03.737071    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 12:29:03.737138    5508 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.031030709s
	I1216 12:29:03.737168    5508 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 12:29:04.016609    5508 start.go:128] duration metric: took 2.309444916s to createHost
	I1216 12:29:04.016650    5508 start.go:83] releasing machines lock for "test-preload-956000", held for 2.309549541s
	W1216 12:29:04.016708    5508 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:04.026143    5508 out.go:177] * Deleting "test-preload-956000" in qemu2 ...
	W1216 12:29:04.049360    5508 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:04.049383    5508 start.go:729] Will try again in 5 seconds ...
	I1216 12:29:04.529046    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1216 12:29:04.529111    5508 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.822755s
	I1216 12:29:04.529142    5508 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1216 12:29:05.264929    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1216 12:29:05.264972    5508 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.558650958s
	I1216 12:29:05.265000    5508 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1216 12:29:06.819624    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1216 12:29:06.819679    5508 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.113401167s
	I1216 12:29:06.819704    5508 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1216 12:29:07.539682    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1216 12:29:07.539729    5508 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.8336095s
	I1216 12:29:07.539753    5508 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1216 12:29:08.142173    5508 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1216 12:29:08.142220    5508 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.436018958s
	I1216 12:29:08.142251    5508 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1216 12:29:09.049750    5508 start.go:360] acquireMachinesLock for test-preload-956000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:29:09.050319    5508 start.go:364] duration metric: took 495.917µs to acquireMachinesLock for "test-preload-956000"
	I1216 12:29:09.050448    5508 start.go:93] Provisioning new machine with config: &{Name:test-preload-956000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:29:09.050691    5508 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:29:09.068909    5508 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:29:09.117551    5508 start.go:159] libmachine.API.Create for "test-preload-956000" (driver="qemu2")
	I1216 12:29:09.117621    5508 client.go:168] LocalClient.Create starting
	I1216 12:29:09.117761    5508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:29:09.117841    5508 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:09.117863    5508 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:09.117929    5508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:29:09.117986    5508 main.go:141] libmachine: Decoding PEM data...
	I1216 12:29:09.118000    5508 main.go:141] libmachine: Parsing certificate...
	I1216 12:29:09.118587    5508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:29:09.288818    5508 main.go:141] libmachine: Creating SSH key...
	I1216 12:29:09.341787    5508 main.go:141] libmachine: Creating Disk image...
	I1216 12:29:09.341793    5508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:29:09.342017    5508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2
	I1216 12:29:09.352145    5508 main.go:141] libmachine: STDOUT: 
	I1216 12:29:09.352163    5508 main.go:141] libmachine: STDERR: 
	I1216 12:29:09.352232    5508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2 +20000M
	I1216 12:29:09.360972    5508 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:29:09.360988    5508 main.go:141] libmachine: STDERR: 
	I1216 12:29:09.361000    5508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2
	I1216 12:29:09.361012    5508 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:29:09.361020    5508 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:29:09.361058    5508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a4:ef:b1:1a:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/test-preload-956000/disk.qcow2
	I1216 12:29:09.363000    5508 main.go:141] libmachine: STDOUT: 
	I1216 12:29:09.363013    5508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:29:09.363025    5508 client.go:171] duration metric: took 245.396333ms to LocalClient.Create
	I1216 12:29:11.365356    5508 start.go:128] duration metric: took 2.314569708s to createHost
	I1216 12:29:11.365444    5508 start.go:83] releasing machines lock for "test-preload-956000", held for 2.315078166s
	W1216 12:29:11.365726    5508 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-956000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:29:11.382242    5508 out.go:201] 
	W1216 12:29:11.386332    5508 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:29:11.386358    5508 out.go:270] * 
	W1216 12:29:11.388851    5508 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:29:11.399229    5508 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-956000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-16 12:29:11.416591 -0800 PST m=+3281.297064167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-956000 -n test-preload-956000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-956000 -n test-preload-956000: exit status 7 (75.464083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-956000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-956000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-956000
--- FAIL: TestPreload (10.00s)
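Note: every qemu2 VM creation in this run dies at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so the guest never boots and each start exits with status 80. A minimal pre-flight check on the CI host might look like the following (hypothetical commands; the binary and socket paths are the SocketVMnetClientPath and SocketVMnetPath values logged above, and sudo may be needed depending on socket permissions):

	# confirm the socket exists and a socket_vmnet daemon is serving it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# exercise the same client the tests use; a healthy daemon execs `true`,
	# an absent one reproduces the "Connection refused" error seen above
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true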

TestScheduledStopUnix (10.24s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-196000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-196000 --memory=2048 --driver=qemu2 : exit status 80 (10.077978292s)

-- stdout --
	* [scheduled-stop-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-196000" primary control-plane node in "scheduled-stop-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-16 12:29:21.66108 -0800 PST m=+3291.541467167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-196000 -n scheduled-stop-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-196000 -n scheduled-stop-196000: exit status 7 (74.5195ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-196000
--- FAIL: TestScheduledStopUnix (10.24s)

TestSkaffold (12.66s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3712331353 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3712331353 version: (1.018270209s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-867000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-867000 --memory=2600 --driver=qemu2 : exit status 80 (9.973549958s)

-- stdout --
	* [skaffold-867000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-867000" primary control-plane node in "skaffold-867000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-867000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-867000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
panic.go:629: *** TestSkaffold FAILED at 2024-12-16 12:29:34.327091 -0800 PST m=+3304.207371709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-867000 -n skaffold-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-867000 -n skaffold-867000: exit status 7 (67.53775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-867000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-867000
--- FAIL: TestSkaffold (12.66s)
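If the check above shows no daemon, restarting socket_vmnet clears this class of failure. A sketch, assuming a source install under /opt/socket_vmnet as described in the minikube qemu driver docs (the gateway address and launchd label here are assumptions, not values taken from this log):

	# run the daemon in the foreground for debugging
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# or, if it was installed as a LaunchDaemon, restart it instead
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet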

TestRunningBinaryUpgrade (605.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4140547260 start -p running-upgrade-868000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4140547260 start -p running-upgrade-868000 --memory=2200 --vm-driver=qemu2 : (56.566474417s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-868000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1216 12:33:11.461968    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-868000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.117266584s)

-- stdout --
	* [running-upgrade-868000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-868000" primary control-plane node in "running-upgrade-868000" cluster
	* Updating the running qemu2 "running-upgrade-868000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1216 12:31:18.845221    6206 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:31:18.845588    6206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:31:18.845592    6206 out.go:358] Setting ErrFile to fd 2...
	I1216 12:31:18.845595    6206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:31:18.845720    6206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:31:18.846785    6206 out.go:352] Setting JSON to false
	I1216 12:31:18.867269    6206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3649,"bootTime":1734377429,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:31:18.867349    6206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:31:18.872207    6206 out.go:177] * [running-upgrade-868000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:31:18.879186    6206 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:31:18.879262    6206 notify.go:220] Checking for updates...
	I1216 12:31:18.886194    6206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:31:18.890178    6206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:31:18.893227    6206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:31:18.896118    6206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:31:18.899233    6206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:31:18.902552    6206 config.go:182] Loaded profile config "running-upgrade-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:31:18.904176    6206 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 12:31:18.907150    6206 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:31:18.910268    6206 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:31:18.915217    6206 start.go:297] selected driver: qemu2
	I1216 12:31:18.915223    6206 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-868000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50805 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:31:18.915297    6206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:31:18.918087    6206 cni.go:84] Creating CNI manager for ""
	I1216 12:31:18.918120    6206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:31:18.918147    6206 start.go:340] cluster config:
	{Name:running-upgrade-868000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50805 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:31:18.918199    6206 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:31:18.927264    6206 out.go:177] * Starting "running-upgrade-868000" primary control-plane node in "running-upgrade-868000" cluster
	I1216 12:31:18.931201    6206 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 12:31:18.931215    6206 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1216 12:31:18.931223    6206 cache.go:56] Caching tarball of preloaded images
	I1216 12:31:18.931292    6206 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:31:18.931297    6206 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1216 12:31:18.931343    6206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/config.json ...
	I1216 12:31:18.931896    6206 start.go:360] acquireMachinesLock for running-upgrade-868000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:31:18.931943    6206 start.go:364] duration metric: took 40.958µs to acquireMachinesLock for "running-upgrade-868000"
	I1216 12:31:18.931951    6206 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:31:18.931957    6206 fix.go:54] fixHost starting: 
	I1216 12:31:18.932680    6206 fix.go:112] recreateIfNeeded on running-upgrade-868000: state=Running err=<nil>
	W1216 12:31:18.932688    6206 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:31:18.937152    6206 out.go:177] * Updating the running qemu2 "running-upgrade-868000" VM ...
	I1216 12:31:18.947224    6206 machine.go:93] provisionDockerMachine start ...
	I1216 12:31:18.947295    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:18.947427    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:18.947432    6206 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 12:31:18.998976    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-868000
	
	I1216 12:31:18.998990    6206 buildroot.go:166] provisioning hostname "running-upgrade-868000"
	I1216 12:31:18.999054    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:18.999169    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:18.999174    6206 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-868000 && echo "running-upgrade-868000" | sudo tee /etc/hostname
	I1216 12:31:19.058471    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-868000
	
	I1216 12:31:19.058534    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:19.058643    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:19.058652    6206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-868000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-868000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-868000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 12:31:19.111811    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:31:19.111825    6206 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20091-990/.minikube CaCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20091-990/.minikube}
	I1216 12:31:19.111833    6206 buildroot.go:174] setting up certificates
	I1216 12:31:19.111837    6206 provision.go:84] configureAuth start
	I1216 12:31:19.111845    6206 provision.go:143] copyHostCerts
	I1216 12:31:19.111903    6206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:31:19.111911    6206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:31:19.112037    6206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:31:19.112228    6206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:31:19.112231    6206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:31:19.112276    6206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:31:19.112419    6206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:31:19.112421    6206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:31:19.112460    6206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:31:19.112562    6206 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-868000 san=[127.0.0.1 localhost minikube running-upgrade-868000]
	I1216 12:31:19.175061    6206 provision.go:177] copyRemoteCerts
	I1216 12:31:19.175116    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:31:19.175125    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:31:19.203866    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 12:31:19.210528    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 12:31:19.217834    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 12:31:19.224953    6206 provision.go:87] duration metric: took 113.104541ms to configureAuth
	I1216 12:31:19.224963    6206 buildroot.go:189] setting minikube options for container-runtime
	I1216 12:31:19.225070    6206 config.go:182] Loaded profile config "running-upgrade-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:31:19.225121    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:19.225211    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:19.225216    6206 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 12:31:19.276566    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1216 12:31:19.276576    6206 buildroot.go:70] root file system type: tmpfs
	I1216 12:31:19.276623    6206 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 12:31:19.276690    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:19.276800    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:19.276835    6206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 12:31:19.333185    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 12:31:19.333257    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:19.333368    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:19.333377    6206 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 12:31:19.385560    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:31:19.385570    6206 machine.go:96] duration metric: took 438.335333ms to provisionDockerMachine
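
The one-liner above only swaps in docker.service.new and restarts Docker when `diff -u` reports a difference, so an unchanged unit costs nothing. A minimal local Go sketch of that compare-then-swap pattern (illustrative only, not minikube's ssh_runner code; the path and service name are taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit rewrites a systemd unit and bounces the service only when the
// rendered content actually differs from what is on disk.
func updateUnit(path string, rendered []byte, service string) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the daemon-reload and restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
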
	I1216 12:31:19.385577    6206 start.go:293] postStartSetup for "running-upgrade-868000" (driver="qemu2")
	I1216 12:31:19.385583    6206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 12:31:19.385646    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 12:31:19.385655    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:31:19.413723    6206 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 12:31:19.415114    6206 info.go:137] Remote host: Buildroot 2021.02.12
	I1216 12:31:19.415121    6206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20091-990/.minikube/addons for local assets ...
	I1216 12:31:19.415185    6206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20091-990/.minikube/files for local assets ...
	I1216 12:31:19.415272    6206 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem -> 14942.pem in /etc/ssl/certs
	I1216 12:31:19.415374    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 12:31:19.418190    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem --> /etc/ssl/certs/14942.pem (1708 bytes)
	I1216 12:31:19.425379    6206 start.go:296] duration metric: took 39.797166ms for postStartSetup
	I1216 12:31:19.425392    6206 fix.go:56] duration metric: took 493.433084ms for fixHost
	I1216 12:31:19.425444    6206 main.go:141] libmachine: Using SSH client type: native
	I1216 12:31:19.425595    6206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032171b0] 0x1032199f0 <nil>  [] 0s} localhost 50773 <nil> <nil>}
	I1216 12:31:19.425602    6206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 12:31:19.477617    6206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734381079.308081722
	
	I1216 12:31:19.477627    6206 fix.go:216] guest clock: 1734381079.308081722
	I1216 12:31:19.477631    6206 fix.go:229] Guest: 2024-12-16 12:31:19.308081722 -0800 PST Remote: 2024-12-16 12:31:19.425406 -0800 PST m=+0.602401293 (delta=-117.324278ms)
	I1216 12:31:19.477644    6206 fix.go:200] guest clock delta is within tolerance: -117.324278ms
	I1216 12:31:19.477647    6206 start.go:83] releasing machines lock for "running-upgrade-868000", held for 545.695042ms
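
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the skew when it is small, as the "delta is within tolerance" line shows. A rough sketch of that comparison (the one-second tolerance here is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// stdout of `date +%s.%N` on the guest, as seen in the log
	guestOut := "1734381079.308081722"
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	secs, _ := strconv.ParseInt(parts[0], 10, 64)
	nanos, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(secs, nanos)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; host would resync the guest\n", delta)
	}
}
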
	I1216 12:31:19.477720    6206 ssh_runner.go:195] Run: cat /version.json
	I1216 12:31:19.477734    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:31:19.477720    6206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 12:31:19.477767    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	W1216 12:31:19.478231    6206 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50773: connect: connection refused
	I1216 12:31:19.478249    6206 retry.go:31] will retry after 181.300241ms: dial tcp [::1]:50773: connect: connection refused
	W1216 12:31:19.503716    6206 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1216 12:31:19.503769    6206 ssh_runner.go:195] Run: systemctl --version
	I1216 12:31:19.505993    6206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 12:31:19.507652    6206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 12:31:19.507682    6206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 12:31:19.511359    6206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 12:31:19.518950    6206 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 12:31:19.518963    6206 start.go:495] detecting cgroup driver to use...
	I1216 12:31:19.519047    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 12:31:19.525644    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1216 12:31:19.529097    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 12:31:19.532782    6206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 12:31:19.532818    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 12:31:19.536157    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 12:31:19.538957    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 12:31:19.543855    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 12:31:19.547009    6206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 12:31:19.549923    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 12:31:19.553129    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 12:31:19.556043    6206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 12:31:19.558914    6206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 12:31:19.562395    6206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 12:31:19.565224    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:31:19.664813    6206 ssh_runner.go:195] Run: sudo systemctl restart containerd
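
Each of the `sed -i -r` runs above is a line-anchored regex rewrite of /etc/containerd/config.toml. For example, the SystemdCgroup flip can be expressed in Go like this (a sketch of the same substitution, run against an inline sample instead of the real file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`)
	// equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(string(re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))))
}
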
	I1216 12:31:19.674260    6206 start.go:495] detecting cgroup driver to use...
	I1216 12:31:19.674318    6206 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 12:31:19.684027    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 12:31:19.689663    6206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 12:31:19.699427    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 12:31:19.741861    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 12:31:19.746716    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 12:31:19.752210    6206 ssh_runner.go:195] Run: which cri-dockerd
	I1216 12:31:19.753511    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 12:31:19.756450    6206 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1216 12:31:19.761606    6206 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 12:31:19.855751    6206 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 12:31:19.948770    6206 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 12:31:19.948832    6206 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 12:31:19.954338    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:31:20.044063    6206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 12:31:33.013816    6206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.969628375s)
	I1216 12:31:33.013904    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 12:31:33.021800    6206 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 12:31:33.030272    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 12:31:33.035380    6206 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 12:31:33.117540    6206 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 12:31:33.208818    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:31:33.290878    6206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 12:31:33.297280    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 12:31:33.302414    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:31:33.364211    6206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 12:31:33.403581    6206 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 12:31:33.403682    6206 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
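
The 60-second socket wait above boils down to polling stat until the path appears or the deadline passes; a minimal sketch (the 250 ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls stat until the path exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
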
	I1216 12:31:33.406005    6206 start.go:563] Will wait 60s for crictl version
	I1216 12:31:33.406054    6206 ssh_runner.go:195] Run: which crictl
	I1216 12:31:33.407468    6206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 12:31:33.420100    6206 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1216 12:31:33.420179    6206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 12:31:33.433675    6206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 12:31:33.449896    6206 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1216 12:31:33.450049    6206 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1216 12:31:33.451359    6206 kubeadm.go:883] updating cluster {Name:running-upgrade-868000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50805 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1216 12:31:33.451399    6206 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 12:31:33.451444    6206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 12:31:33.462869    6206 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 12:31:33.462881    6206 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 12:31:33.462946    6206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 12:31:33.466515    6206 ssh_runner.go:195] Run: which lz4
	I1216 12:31:33.467647    6206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 12:31:33.468979    6206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 12:31:33.468990    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1216 12:31:34.464917    6206 docker.go:653] duration metric: took 997.302208ms to copy over tarball
	I1216 12:31:34.465005    6206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 12:31:36.041555    6206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.576524875s)
	I1216 12:31:36.041569    6206 ssh_runner.go:146] rm: /preloaded.tar.lz4
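
The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~360 MB tarball over when the stat fails, untar it into /var with lz4 decompression, then delete it. The extraction step, sketched locally with os/exec and the same tar flags the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "tarball missing; would scp it over first:", err)
		return
	}
	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		return
	}
	_ = os.Remove(tarball) // the log removes the tarball once extracted
}
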
	I1216 12:31:36.057331    6206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 12:31:36.060142    6206 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1216 12:31:36.065100    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:31:36.126569    6206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 12:31:37.305141    6206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.178546209s)
	I1216 12:31:37.305238    6206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 12:31:37.316346    6206 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 12:31:37.316357    6206 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 12:31:37.316362    6206 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 12:31:37.321774    6206 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:31:37.323859    6206 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:31:37.325431    6206 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:31:37.325453    6206 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:31:37.327293    6206 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:31:37.327302    6206 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:31:37.328833    6206 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:31:37.329007    6206 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:31:37.330026    6206 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:31:37.329996    6206 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:31:37.330953    6206 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 12:31:37.332362    6206 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:31:37.332590    6206 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:31:37.332836    6206 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:31:37.333093    6206 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 12:31:37.334752    6206 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:31:37.803085    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:31:37.815852    6206 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1216 12:31:37.815887    6206 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:31:37.815960    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:31:37.826337    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1216 12:31:37.866679    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:31:37.873606    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:31:37.878060    6206 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1216 12:31:37.878086    6206 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:31:37.878140    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:31:37.887299    6206 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1216 12:31:37.887334    6206 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:31:37.887397    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:31:37.896978    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1216 12:31:37.899007    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1216 12:31:37.949302    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:31:37.959882    6206 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1216 12:31:37.959902    6206 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:31:37.959960    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:31:37.972534    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1216 12:31:37.985065    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 12:31:37.994690    6206 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1216 12:31:37.994710    6206 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:31:37.994785    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1216 12:31:38.004825    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 12:31:38.033546    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 12:31:38.044278    6206 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1216 12:31:38.044304    6206 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1216 12:31:38.044363    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1216 12:31:38.054942    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 12:31:38.055073    6206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1216 12:31:38.056789    6206 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1216 12:31:38.056814    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1216 12:31:38.065256    6206 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 12:31:38.065274    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1216 12:31:38.094396    6206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
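
Every cached image follows the same cycle seen here for pause:3.7: `docker image inspect` to compare the on-disk hash, `docker rmi` when it differs, scp the cached tarball into /var/lib/minikube/images, then pipe it into `docker load`. The load step is just a stdin pipe, sketched below:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage mirrors the `sudo cat <tarball> | docker load` pipeline from the
// log by wiring the tarball straight into docker load's stdin.
func loadImage(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
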
	W1216 12:31:38.113553    6206 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 12:31:38.113698    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:31:38.128764    6206 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1216 12:31:38.128788    6206 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:31:38.128860    6206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:31:38.138296    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 12:31:38.138450    6206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 12:31:38.139997    6206 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1216 12:31:38.140008    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1216 12:31:38.181268    6206 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 12:31:38.181282    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W1216 12:31:38.221040    6206 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 12:31:38.221175    6206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:31:38.250002    6206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1216 12:31:38.250083    6206 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 12:31:38.250105    6206 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:31:38.250177    6206 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:31:38.345083    6206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 12:31:38.345235    6206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 12:31:38.347130    6206 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 12:31:38.347155    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1216 12:31:38.381890    6206 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 12:31:38.381910    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1216 12:31:38.670096    6206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 12:31:38.670149    6206 cache_images.go:92] duration metric: took 1.353755083s to LoadCachedImages
	W1216 12:31:38.670187    6206 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1216 12:31:38.670194    6206 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1216 12:31:38.670269    6206 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-868000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 12:31:38.670360    6206 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 12:31:38.694528    6206 cni.go:84] Creating CNI manager for ""
	I1216 12:31:38.694542    6206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:31:38.694551    6206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 12:31:38.694560    6206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-868000 NodeName:running-upgrade-868000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 12:31:38.694629    6206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-868000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 12:31:38.694708    6206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1216 12:31:38.697797    6206 binaries.go:44] Found k8s binaries, skipping transfer
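
minikube renders the kubeadm YAML above from Go templates filled with the kubeadm options struct logged at kubeadm.go:189. A toy sketch of that rendering (the template text here is abbreviated and illustrative, not minikube's actual template; the field values come from the log):

package main

import (
	"os"
	"text/template"
)

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	data := struct {
		ClusterName, Endpoint, Version, PodSubnet, ServiceSubnet string
	}{"mk", "control-plane.minikube.internal:8443", "v1.24.1", "10.244.0.0/16", "10.96.0.0/12"}
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
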
	I1216 12:31:38.697835    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 12:31:38.701004    6206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1216 12:31:38.706109    6206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 12:31:38.711137    6206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1216 12:31:38.716250    6206 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1216 12:31:38.717596    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:31:38.801677    6206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:31:38.806845    6206 certs.go:68] Setting up /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000 for IP: 10.0.2.15
	I1216 12:31:38.806853    6206 certs.go:194] generating shared ca certs ...
	I1216 12:31:38.806862    6206 certs.go:226] acquiring lock for ca certs: {Name:mkaa7d3f47c3893d22672057b4e8b1df7ff583ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:31:38.807296    6206 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.key
	I1216 12:31:38.807552    6206 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.key
	I1216 12:31:38.807587    6206 certs.go:256] generating profile certs ...
	I1216 12:31:38.807854    6206 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.key
	I1216 12:31:38.807894    6206 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.key.256cda61
	I1216 12:31:38.807924    6206 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.crt.256cda61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1216 12:31:38.924294    6206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.crt.256cda61 ...
	I1216 12:31:38.924307    6206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.crt.256cda61: {Name:mk07b5054eb5841bfc61d767f20bcc634cb60cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:31:38.924563    6206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.key.256cda61 ...
	I1216 12:31:38.924567    6206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.key.256cda61: {Name:mkee210cd5532a4ca1ae9a3369a1900a2a3a7151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:31:38.924747    6206 certs.go:381] copying /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.crt.256cda61 -> /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.crt
	I1216 12:31:38.924877    6206 certs.go:385] copying /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.key.256cda61 -> /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.key
	I1216 12:31:38.925053    6206 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/proxy-client.key
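
The apiserver cert is regenerated with the four IP SANs listed at crypto.go:68. A self-signed crypto/x509 sketch of a cert carrying those SANs (minikube signs with its minikubeCA key rather than self-signing, and the ECDSA key choice is just for brevity here):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		// the four SANs from crypto.go:68 in the log
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
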
	I1216 12:31:38.925209    6206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494.pem (1338 bytes)
	W1216 12:31:38.925243    6206 certs.go:480] ignoring /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494_empty.pem, impossibly tiny 0 bytes
	I1216 12:31:38.925249    6206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 12:31:38.925282    6206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem (1082 bytes)
	I1216 12:31:38.925312    6206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem (1123 bytes)
	I1216 12:31:38.925343    6206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem (1675 bytes)
	I1216 12:31:38.925406    6206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem (1708 bytes)
	I1216 12:31:38.925723    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 12:31:38.933125    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 12:31:38.939725    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 12:31:38.947350    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 12:31:38.954931    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 12:31:38.962003    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 12:31:38.968612    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 12:31:38.975698    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 12:31:38.983208    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494.pem --> /usr/share/ca-certificates/1494.pem (1338 bytes)
	I1216 12:31:38.990508    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem --> /usr/share/ca-certificates/14942.pem (1708 bytes)
	I1216 12:31:38.997538    6206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 12:31:39.004240    6206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 12:31:39.009126    6206 ssh_runner.go:195] Run: openssl version
	I1216 12:31:39.011003    6206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 12:31:39.014543    6206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:31:39.016091    6206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:31:39.016123    6206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:31:39.017927    6206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 12:31:39.020598    6206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1494.pem && ln -fs /usr/share/ca-certificates/1494.pem /etc/ssl/certs/1494.pem"
	I1216 12:31:39.023550    6206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1494.pem
	I1216 12:31:39.025081    6206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/1494.pem
	I1216 12:31:39.025115    6206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1494.pem
	I1216 12:31:39.030989    6206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1494.pem /etc/ssl/certs/51391683.0"
	I1216 12:31:39.034593    6206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14942.pem && ln -fs /usr/share/ca-certificates/14942.pem /etc/ssl/certs/14942.pem"
	I1216 12:31:39.037905    6206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14942.pem
	I1216 12:31:39.039321    6206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/14942.pem
	I1216 12:31:39.039346    6206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14942.pem
	I1216 12:31:39.041149    6206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14942.pem /etc/ssl/certs/3ec20f2e.0"
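
The `openssl x509 -hash -noout` / `ln -fs` pairs above install each PEM into OpenSSL's hashed trust directory: the link name /etc/ssl/certs/<subject-hash>.0 is what the TLS stack looks up at verification time. The same step in Go, shelling out for the hash (a sketch; it needs root, and unlike `ln -fs` it tolerates rather than forces an existing link):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("trust store link:", link)
}
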
	I1216 12:31:39.043941    6206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 12:31:39.045425    6206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 12:31:39.047203    6206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 12:31:39.048974    6206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 12:31:39.050592    6206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 12:31:39.052499    6206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 12:31:39.054139    6206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
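
This block of `openssl x509 -checkend 86400` runs asks one question per cert: will it still be valid 24 hours from now? The equivalent check in pure Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the cert at path is still valid `window` from now,
// like `openssl x509 -checkend <seconds>`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for another 24h:", ok, err)
}
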
	I1216 12:31:39.055888    6206 kubeadm.go:392] StartCluster: {Name:running-upgrade-868000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50805 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:31:39.055962    6206 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 12:31:39.066613    6206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 12:31:39.070434    6206 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 12:31:39.070444    6206 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 12:31:39.070482    6206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 12:31:39.073610    6206 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:31:39.073854    6206 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-868000" does not appear in /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:31:39.073899    6206 kubeconfig.go:62] /Users/jenkins/minikube-integration/20091-990/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-868000" cluster setting kubeconfig missing "running-upgrade-868000" context setting]
	I1216 12:31:39.074040    6206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:31:39.074704    6206 kapi.go:59] client config for running-upgrade-868000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104c82f70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:31:39.075054    6206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 12:31:39.078375    6206 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-868000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1216 12:31:39.078382    6206 kubeadm.go:1160] stopping kube-system containers ...
	I1216 12:31:39.078434    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 12:31:39.089197    6206 docker.go:483] Stopping containers: [da32f2743333 1d0d40f8bc0d 41ccd0aef9a9 c2e997ac0e44 d93c51d09ee4 6e29ba95c43d a100b16165bd fbde9cba9173 b68b6f1c2d05 d96506c2cb1f 64d46781ed55 7f4fe4e9398f 5c703b0416ad eab96b3b43cf 116f95fd2260 492a896eedc6]
	I1216 12:31:39.089275    6206 ssh_runner.go:195] Run: docker stop da32f2743333 1d0d40f8bc0d 41ccd0aef9a9 c2e997ac0e44 d93c51d09ee4 6e29ba95c43d a100b16165bd fbde9cba9173 b68b6f1c2d05 d96506c2cb1f 64d46781ed55 7f4fe4e9398f 5c703b0416ad eab96b3b43cf 116f95fd2260 492a896eedc6
	I1216 12:31:40.100192    6206 ssh_runner.go:235] Completed: docker stop da32f2743333 1d0d40f8bc0d 41ccd0aef9a9 c2e997ac0e44 d93c51d09ee4 6e29ba95c43d a100b16165bd fbde9cba9173 b68b6f1c2d05 d96506c2cb1f 64d46781ed55 7f4fe4e9398f 5c703b0416ad eab96b3b43cf 116f95fd2260 492a896eedc6: (1.010893334s)
	I1216 12:31:40.100291    6206 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 12:31:40.177139    6206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:31:40.180572    6206 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Dec 16 20:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec 16 20:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 16 20:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Dec 16 20:31 /etc/kubernetes/scheduler.conf
	
	I1216 12:31:40.180622    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/admin.conf
	I1216 12:31:40.183253    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:31:40.183288    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:31:40.186818    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/kubelet.conf
	I1216 12:31:40.190231    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:31:40.190263    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:31:40.193691    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/controller-manager.conf
	I1216 12:31:40.196746    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:31:40.196780    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:31:40.199693    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/scheduler.conf
	I1216 12:31:40.202752    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:31:40.202798    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:31:40.206339    6206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:31:40.209643    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:31:40.240727    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:31:40.786032    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:31:40.981886    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:31:41.006271    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
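
Rather than a full kubeadm init, the restart path re-runs individual init phases against the rendered /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the pinned v1.24.1 binaries. A sketch of that sequence, with the phase order and paths copied from the five Run lines above; the bash -c wrapping mirrors those lines, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same order as the five Run lines above.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
    			`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			log.Fatalf("kubeadm init phase %s: %v", p, err)
    		}
    	}
    }
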
	I1216 12:31:41.029942    6206 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:31:41.030032    6206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:31:41.532312    6206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:31:42.032154    6206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:31:42.036556    6206 api_server.go:72] duration metric: took 1.006607291s to wait for apiserver process to appear ...
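
The wait above simply polls pgrep until a kube-apiserver process exists; pgrep -xnf exits non-zero while there is no match. A minimal sketch, with the pattern taken from the Run lines and the roughly 500ms interval inferred from the timestamps (an assumption):

    package main

    import (
    	"os/exec"
    	"time"
    )

    func main() {
    	// pgrep exits 0 once a process matches the full command-line pattern.
    	for exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() != nil {
    		time.Sleep(500 * time.Millisecond)
    	}
    }
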
	I1216 12:31:42.036567    6206 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:31:42.036586    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:31:47.038860    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:31:47.038968    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:31:52.039934    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:31:52.040027    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:31:57.041431    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:31:57.041511    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:02.042896    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:02.043000    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:07.045098    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:07.045190    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:12.047651    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:12.047734    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:17.049892    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:17.050001    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:22.052591    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:22.052677    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:27.055348    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:27.055436    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:32.056577    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:32.056658    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:37.059359    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:37.059411    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:42.061684    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
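
Each probe above fails with "Client.Timeout exceeded while awaiting headers" after about five seconds, so the loop makes one attempt per ~5s until the overall wait expires and the code falls back to gathering component logs. A minimal Go sketch of that probe pattern, assuming the in-VM endpoint from the logs, a 5s client timeout, and skip-verify TLS (the apiserver serves a self-signed certificate inside the guest); this illustrates the pattern, it is not minikube's implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // yields "Client.Timeout exceeded while awaiting headers"
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // overall budget: an assumption
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err)
    			continue // the 5s client timeout itself paces the loop
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("apiserver never became healthy")
    }
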
	I1216 12:32:42.061924    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:32:42.083776    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:32:42.083926    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:32:42.100646    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:32:42.100752    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:32:42.113685    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:32:42.113770    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:32:42.128638    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:32:42.128723    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:32:42.139240    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:32:42.139320    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:32:42.154757    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:32:42.154836    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:32:42.164622    6206 logs.go:282] 0 containers: []
	W1216 12:32:42.164632    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:32:42.164699    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:32:42.175192    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:32:42.175211    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:32:42.175216    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:32:42.190736    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:32:42.190748    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:32:42.202865    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:32:42.202874    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:32:42.223687    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:32:42.223697    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:32:42.228282    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:32:42.228290    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:32:42.243103    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:32:42.243116    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:32:42.261361    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:32:42.261371    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:32:42.275679    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:32:42.275692    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:32:42.289527    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:32:42.289539    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:32:42.306399    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:32:42.306411    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:32:42.323204    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:32:42.323218    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:32:42.334197    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:32:42.334210    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:32:42.346782    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:32:42.346792    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:32:42.384825    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:32:42.384835    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:32:42.458256    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:32:42.458270    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:32:42.470107    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:32:42.470121    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:32:42.482176    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:32:42.482186    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
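
Every gathering pass from here on repeats the same routine: list container IDs per component with a docker ps name filter (two IDs appear for most components because docker ps -a still lists the stopped pre-restart container alongside the new one), then tail the last 400 lines of each, plus the kubelet and docker journals, dmesg, and kubectl describe nodes. A condensed sketch of the per-component loop, with the component names and tail depth taken from the logs and the rest assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		out, _ := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
    		}
    	}
    }
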
	I1216 12:32:45.011059    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:50.013837    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:50.014100    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:32:50.040996    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:32:50.041105    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:32:50.054930    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:32:50.055020    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:32:50.066528    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:32:50.066595    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:32:50.076817    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:32:50.076897    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:32:50.086846    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:32:50.086916    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:32:50.097460    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:32:50.097542    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:32:50.107691    6206 logs.go:282] 0 containers: []
	W1216 12:32:50.107704    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:32:50.107772    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:32:50.117865    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:32:50.117895    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:32:50.117900    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:32:50.129874    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:32:50.129887    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:32:50.140686    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:32:50.140697    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:32:50.165071    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:32:50.165080    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:32:50.176707    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:32:50.176719    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:32:50.212397    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:32:50.212408    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:32:50.226642    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:32:50.226656    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:32:50.238731    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:32:50.238743    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:32:50.249864    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:32:50.249874    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:32:50.267660    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:32:50.267671    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:32:50.279690    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:32:50.279701    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:32:50.294596    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:32:50.294609    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:32:50.332684    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:32:50.332693    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:32:50.336712    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:32:50.336723    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:32:50.350153    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:32:50.350163    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:32:50.364392    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:32:50.364405    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:32:50.379939    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:32:50.379951    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:32:52.893367    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:32:57.896010    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:32:57.896605    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:32:57.941002    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:32:57.941187    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:32:57.968085    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:32:57.968183    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:32:57.982500    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:32:57.982580    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:32:57.995030    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:32:57.995110    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:32:58.010059    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:32:58.010127    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:32:58.020554    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:32:58.020640    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:32:58.030868    6206 logs.go:282] 0 containers: []
	W1216 12:32:58.030879    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:32:58.030941    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:32:58.043973    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:32:58.043993    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:32:58.044003    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:32:58.056042    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:32:58.056053    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:32:58.067507    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:32:58.067519    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:32:58.085932    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:32:58.085945    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:32:58.100497    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:32:58.100509    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:32:58.115942    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:32:58.115955    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:32:58.130578    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:32:58.130591    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:32:58.142440    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:32:58.142451    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:32:58.168305    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:32:58.168314    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:32:58.181812    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:32:58.181823    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:32:58.194206    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:32:58.194215    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:32:58.232661    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:32:58.232670    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:32:58.244404    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:32:58.244416    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:32:58.259815    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:32:58.259825    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:32:58.271120    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:32:58.271131    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:32:58.283798    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:32:58.283809    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:32:58.288228    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:32:58.288235    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:00.825597    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:05.828464    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:05.828996    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:05.869639    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:05.869803    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:05.891710    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:05.891861    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:05.906907    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:05.906993    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:05.919272    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:05.919355    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:05.929778    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:05.929852    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:05.940191    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:05.940257    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:05.951245    6206 logs.go:282] 0 containers: []
	W1216 12:33:05.951257    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:05.951327    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:05.962179    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:05.962200    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:05.962205    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:05.996664    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:05.996679    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:06.019308    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:06.019322    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:06.038056    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:06.038068    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:06.050670    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:06.050681    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:06.063026    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:06.063036    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:06.080315    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:06.080329    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:06.094345    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:06.094358    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:06.105766    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:06.105781    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:06.146383    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:06.146393    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:06.150819    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:06.150827    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:06.162572    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:06.162583    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:06.176754    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:06.176765    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:06.188250    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:06.188260    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:06.203159    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:06.203172    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:06.217926    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:06.217937    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:06.242724    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:06.242740    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:08.771440    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:13.774339    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:13.774912    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:13.812444    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:13.812610    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:13.834218    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:13.834354    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:13.854066    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:13.854151    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:13.865867    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:13.865947    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:13.876508    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:13.876587    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:13.887068    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:13.887137    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:13.898150    6206 logs.go:282] 0 containers: []
	W1216 12:33:13.898167    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:13.898235    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:13.909318    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:13.909333    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:13.909338    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:13.921688    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:13.921699    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:13.926519    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:13.926523    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:13.941699    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:13.941708    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:13.953455    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:13.953468    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:13.970604    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:13.970613    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:13.981743    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:13.981757    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:14.006276    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:14.006287    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:14.019540    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:14.019554    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:14.033229    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:14.033241    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:14.049036    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:14.049049    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:14.061395    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:14.061409    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:14.073249    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:14.073259    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:14.099354    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:14.099368    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:14.117263    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:14.117276    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:14.129641    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:14.129654    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:14.169216    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:14.169227    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:16.706537    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:21.709478    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:21.710020    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:21.747792    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:21.747951    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:21.768851    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:21.768974    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:21.784142    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:21.784232    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:21.796860    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:21.796932    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:21.808369    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:21.808449    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:21.820272    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:21.820349    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:21.829933    6206 logs.go:282] 0 containers: []
	W1216 12:33:21.829942    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:21.829993    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:21.840476    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:21.840497    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:21.840503    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:21.880148    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:21.880162    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:21.893535    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:21.893549    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:21.898363    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:21.898372    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:21.912390    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:21.912402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:21.926833    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:21.926845    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:21.942270    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:21.942283    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:21.953529    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:21.953541    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:21.967683    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:21.967697    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:21.979920    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:21.979930    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:21.991341    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:21.991352    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:22.008597    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:22.008607    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:22.019917    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:22.019932    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:22.030824    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:22.030834    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:22.067120    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:22.067129    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:22.081065    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:22.081075    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:22.092199    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:22.092210    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:24.618468    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:29.619585    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:29.619996    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:29.655761    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:29.655912    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:29.675473    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:29.675596    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:29.690308    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:29.690401    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:29.709895    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:29.709972    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:29.720818    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:29.720899    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:29.731545    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:29.731624    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:29.742267    6206 logs.go:282] 0 containers: []
	W1216 12:33:29.742282    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:29.742350    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:29.753302    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:29.753320    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:29.753326    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:29.765752    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:29.765766    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:29.780286    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:29.780298    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:29.813945    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:29.813959    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:29.827947    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:29.827961    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:29.839504    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:29.839516    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:29.865137    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:29.865146    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:29.904439    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:29.904450    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:29.908541    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:29.908549    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:29.922794    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:29.922808    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:29.938136    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:29.938149    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:29.955715    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:29.955728    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:29.973286    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:29.973298    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:29.989050    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:29.989065    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:30.004095    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:30.004104    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:30.018813    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:30.018823    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:30.030750    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:30.030762    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:32.544692    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:37.546648    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:37.547188    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:37.598981    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:37.599105    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:37.618593    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:37.618680    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:37.631928    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:37.632013    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:37.643763    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:37.643840    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:37.654297    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:37.654372    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:37.669260    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:37.669335    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:37.679574    6206 logs.go:282] 0 containers: []
	W1216 12:33:37.679586    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:37.679641    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:37.690295    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:37.690312    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:37.690317    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:37.704217    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:37.704228    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:37.716218    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:37.716232    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:37.733967    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:37.733978    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:37.767951    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:37.767964    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:37.782855    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:37.782866    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:37.794611    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:37.794625    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:37.819212    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:37.819218    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:37.857970    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:37.857983    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:37.862293    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:37.862300    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:37.879719    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:37.879735    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:37.898247    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:37.898260    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:37.910228    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:37.910237    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:37.923799    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:37.923810    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:37.935282    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:37.935294    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:37.947040    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:37.947052    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:37.960603    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:37.960614    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:40.473950    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:45.476471    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:45.477045    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:45.514943    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:45.515091    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:45.536463    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:45.536594    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:45.552382    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:45.552463    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:45.565522    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:45.565603    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:45.576115    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:45.576181    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:45.587242    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:45.587320    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:45.597611    6206 logs.go:282] 0 containers: []
	W1216 12:33:45.597624    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:45.597693    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:45.608136    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:45.608157    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:45.608162    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:45.620158    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:45.620170    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:45.639379    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:45.639394    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:45.654491    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:45.654505    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:45.691904    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:45.691912    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:45.695909    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:45.695917    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:45.707168    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:45.707177    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:45.722027    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:45.722041    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:45.746221    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:45.746230    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:45.758174    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:45.758186    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:45.770785    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:45.770800    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:45.784474    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:45.784485    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:45.796344    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:45.796357    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:45.808346    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:45.808358    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:45.819832    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:45.819844    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:45.854139    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:45.854149    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:45.868441    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:45.868455    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:48.388311    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:33:53.390975    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:33:53.391174    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:33:53.402543    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:33:53.402620    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:33:53.413241    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:33:53.413344    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:33:53.424159    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:33:53.424239    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:33:53.434819    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:33:53.434900    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:33:53.451950    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:33:53.452024    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:33:53.463057    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:33:53.463129    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:33:53.473552    6206 logs.go:282] 0 containers: []
	W1216 12:33:53.473563    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:33:53.473625    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:33:53.485246    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:33:53.485266    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:33:53.485272    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:33:53.501041    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:33:53.501053    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:33:53.514547    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:33:53.514559    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:33:53.529009    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:33:53.529019    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:33:53.533259    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:33:53.533267    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:33:53.569791    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:33:53.569803    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:33:53.589671    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:33:53.589682    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:33:53.601814    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:33:53.601826    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:33:53.622169    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:33:53.622179    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:33:53.639675    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:33:53.639686    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:33:53.652000    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:33:53.652013    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:33:53.691040    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:33:53.691048    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:33:53.703864    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:33:53.703876    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:33:53.730144    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:33:53.730162    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:33:53.749965    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:33:53.749980    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:33:53.763381    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:33:53.763392    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:33:53.779707    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:33:53.779720    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:33:56.295311    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:01.298175    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:01.298383    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:01.312128    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:01.312226    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:01.323085    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:01.323169    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:01.334651    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:01.334724    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:01.347857    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:01.347928    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:01.358000    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:01.358080    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:01.368308    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:01.368389    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:01.380909    6206 logs.go:282] 0 containers: []
	W1216 12:34:01.380924    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:01.380986    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:01.391561    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:01.391584    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:01.391590    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:01.403137    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:01.403149    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:01.437512    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:01.437524    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:01.448596    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:01.448607    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:01.474143    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:01.474156    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:01.488124    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:01.488139    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:01.499456    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:01.499466    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:01.513149    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:01.513163    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:01.527810    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:01.527821    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:01.540520    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:01.540532    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:01.551887    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:01.551898    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:01.567277    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:01.567289    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:01.583047    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:01.583059    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:01.606989    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:01.606997    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:01.619565    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:01.619579    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:01.657618    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:01.657626    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:01.661663    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:01.661671    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:04.173794    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:09.176014    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:09.176300    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:09.187155    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:09.187244    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:09.201528    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:09.201613    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:09.212639    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:09.212706    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:09.223302    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:09.223382    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:09.234666    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:09.234753    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:09.245816    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:09.245896    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:09.256828    6206 logs.go:282] 0 containers: []
	W1216 12:34:09.256840    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:09.256918    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:09.267474    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:09.267493    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:09.267499    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:09.282453    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:09.282466    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:09.294035    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:09.294050    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:09.309558    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:09.309571    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:09.324553    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:09.324567    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:09.336578    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:09.336592    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:09.340966    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:09.340975    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:09.353453    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:09.353464    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:09.365185    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:09.365195    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:09.386420    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:09.386430    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:09.425347    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:09.425361    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:09.437963    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:09.437976    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:09.451487    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:09.451499    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:09.491915    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:09.491931    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:09.507683    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:09.507696    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:09.523006    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:09.523020    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:09.535520    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:09.535534    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:12.064448    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:17.065104    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:17.065227    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:17.080971    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:17.081051    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:17.100362    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:17.100447    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:17.111458    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:17.111534    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:17.122918    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:17.123001    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:17.134076    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:17.134161    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:17.147053    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:17.147140    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:17.158235    6206 logs.go:282] 0 containers: []
	W1216 12:34:17.158246    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:17.158316    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:17.170004    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:17.170028    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:17.170035    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:17.187681    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:17.187697    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:17.201010    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:17.201028    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:17.213035    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:17.213047    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:17.232924    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:17.232941    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:17.257237    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:17.257256    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:17.270818    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:17.270831    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:17.312087    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:17.312101    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:17.327599    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:17.327613    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:17.341104    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:17.341118    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:17.359633    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:17.359646    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:17.380708    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:17.380719    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:17.397405    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:17.397516    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:17.423378    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:17.423394    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:17.435653    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:17.435668    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:17.450978    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:17.450991    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:17.490896    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:17.490908    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:19.998134    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:25.000337    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:25.000896    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:25.046853    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:25.047012    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:25.067722    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:25.067831    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:25.082522    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:25.082598    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:25.094277    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:25.094362    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:25.105006    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:25.105085    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:25.115572    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:25.115644    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:25.126184    6206 logs.go:282] 0 containers: []
	W1216 12:34:25.126199    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:25.126262    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:25.136923    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:25.136941    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:25.136947    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:25.176206    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:25.176213    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:25.187543    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:25.187554    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:25.205314    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:25.205323    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:25.230019    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:25.230026    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:25.242096    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:25.242109    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:25.254214    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:25.254227    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:25.259081    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:25.259090    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:25.270675    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:25.270688    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:25.284664    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:25.284677    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:25.298935    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:25.298944    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:25.313896    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:25.313908    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:25.328973    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:25.328985    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:25.342645    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:25.342660    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:25.355905    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:25.355917    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:25.367941    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:25.367951    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:25.401296    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:25.401308    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:27.923028    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:32.925755    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:32.925864    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:32.938750    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:32.938837    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:32.956296    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:32.956422    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:32.966788    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:32.966880    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:32.977549    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:32.977629    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:32.987794    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:32.987871    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:32.998903    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:32.998979    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:33.008996    6206 logs.go:282] 0 containers: []
	W1216 12:34:33.009010    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:33.009084    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:33.019674    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:33.019696    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:33.019703    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:33.031219    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:33.031230    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:33.046884    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:33.046898    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:33.084428    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:33.084440    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:33.107427    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:33.107437    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:33.122878    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:33.122889    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:33.140374    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:33.140384    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:33.164098    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:33.164107    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:33.176225    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:33.176236    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:33.190261    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:33.190274    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:33.201756    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:33.201769    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:33.212921    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:33.212931    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:33.253252    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:33.253261    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:33.265555    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:33.265567    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:33.283265    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:33.283281    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:33.287780    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:33.287788    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:33.301297    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:33.301310    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:35.814639    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:40.817462    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:40.817611    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:40.835880    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:40.835963    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:40.846550    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:40.846626    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:40.857647    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:40.857726    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:40.868268    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:40.868349    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:40.879576    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:40.879644    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:40.890911    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:40.890985    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:40.901046    6206 logs.go:282] 0 containers: []
	W1216 12:34:40.901057    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:40.901126    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:40.911471    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:40.911491    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:40.911496    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:40.923544    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:40.923556    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:40.935336    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:40.935347    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:40.950031    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:40.950040    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:40.975527    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:40.975534    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:41.019179    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:41.019187    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:41.033665    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:41.033676    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:41.057630    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:41.057643    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:41.069180    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:41.069193    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:41.081434    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:41.081445    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:41.098174    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:41.098186    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:41.114008    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:41.114020    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:41.125634    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:41.125646    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:41.141271    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:41.141281    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:41.145833    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:41.145842    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:41.182521    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:41.182532    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:41.197592    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:41.197603    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:43.716555    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:48.718831    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:48.718956    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:48.733168    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:48.733256    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:48.744389    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:48.744479    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:48.760064    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:48.760147    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:48.772871    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:48.772963    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:48.784539    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:48.784623    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:48.795997    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:48.796083    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:48.808124    6206 logs.go:282] 0 containers: []
	W1216 12:34:48.808135    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:48.808202    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:48.821029    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:48.821048    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:48.821057    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:48.825741    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:48.825753    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:48.840922    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:48.840934    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:48.855171    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:48.855183    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:48.873642    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:48.873656    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:48.887007    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:48.887023    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:48.899421    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:48.899433    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:48.922551    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:48.922559    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:48.937386    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:48.937401    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:48.962129    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:48.962140    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:48.972972    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:48.972984    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:48.984879    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:48.984889    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:49.022194    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:49.022204    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:49.038595    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:49.038608    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:49.076087    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:49.076103    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:49.088450    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:49.088466    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:49.104106    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:49.104115    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:51.618112    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:56.618614    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:56.618707    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:56.630832    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:56.630923    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:56.642647    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:56.642732    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:56.654969    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:56.655058    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:56.666823    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:56.666910    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:56.678442    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:56.678528    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:56.690739    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:56.690832    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:56.702343    6206 logs.go:282] 0 containers: []
	W1216 12:34:56.702353    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:56.702421    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:56.714907    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:56.714924    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:56.714930    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:56.737727    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:56.737740    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:56.754329    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:56.754342    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:56.767255    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:56.767272    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:56.808023    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:56.808039    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:56.821634    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:56.821648    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:56.846942    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:56.846957    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:56.861367    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:56.861379    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:56.866639    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:56.866649    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:56.881036    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:56.881053    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:56.896734    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:56.896746    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:56.913569    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:56.913581    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:56.930279    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:56.930291    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:56.942535    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:56.942549    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:56.984088    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:56.984103    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:56.999452    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:56.999465    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:57.012526    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:57.012539    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:59.534270    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:04.536684    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:04.536785    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:04.548264    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:04.548343    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:04.559857    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:04.559930    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:04.570698    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:04.570772    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:04.581664    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:04.581745    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:04.597870    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:04.597984    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:04.608714    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:04.608791    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:04.619330    6206 logs.go:282] 0 containers: []
	W1216 12:35:04.619342    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:04.619409    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:04.630435    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:04.630454    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:04.630460    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:04.646701    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:04.646714    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:04.663052    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:04.663068    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:04.675704    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:04.675717    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:04.689425    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:04.689436    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:04.704278    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:04.704293    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:04.729554    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:04.729564    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:04.741598    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:04.741611    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:04.778311    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:04.778322    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:04.790536    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:04.790549    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:04.809523    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:04.809538    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:04.822075    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:04.822088    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:04.837133    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:04.837148    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:04.853793    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:04.853806    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:04.858667    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:04.858676    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:04.873844    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:04.873856    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:04.892622    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:04.892634    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:07.444402    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:12.446673    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:12.446829    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:12.458412    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:12.458498    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:12.469922    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:12.469996    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:12.480485    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:12.480576    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:12.491059    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:12.491137    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:12.501575    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:12.501647    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:12.515961    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:12.516031    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:12.528393    6206 logs.go:282] 0 containers: []
	W1216 12:35:12.528405    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:12.528474    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:12.539478    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:12.539497    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:12.539504    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:12.579970    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:12.579980    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:12.594290    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:12.594305    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:12.609710    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:12.609721    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:12.621415    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:12.621425    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:12.641493    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:12.641504    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:12.658803    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:12.658814    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:12.673416    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:12.673426    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:12.709494    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:12.709506    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:12.723835    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:12.723846    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:12.735982    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:12.735993    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:12.747584    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:12.747596    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:12.752167    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:12.752174    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:12.764217    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:12.764228    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:12.776287    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:12.776298    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:12.791241    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:12.791252    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:12.802902    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:12.802915    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:15.327281    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:20.328297    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
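[Editor's note] The pair of lines above repeats throughout this section: minikube probes https://10.0.2.15:8443/healthz and reports "stopped" once the client timeout expires, then falls back to gathering logs. A minimal standalone sketch of that polling pattern follows; the URL and the 5-second timeout are taken from the log, while the skip-verify TLS setting and the 3-second retry pause are illustrative assumptions, not minikube's actual implementation.

    // healthzpoll.go: illustrative sketch of the apiserver health probe
    // loop visible in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Matches the ~5s gap between each "Checking" and "stopped" line.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The test apiserver serves a self-signed certificate; a real
                // client would pin the cluster CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. context deadline exceeded
                time.Sleep(3 * time.Second)  // assumed retry cadence
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
    }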
	I1216 12:35:20.328496    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:20.341709    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:20.341798    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:20.353219    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:20.353303    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:20.364072    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:20.364155    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:20.374543    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:20.374624    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:20.385069    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:20.385154    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:20.395541    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:20.395624    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:20.406087    6206 logs.go:282] 0 containers: []
	W1216 12:35:20.406098    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:20.406158    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:20.416455    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:20.416475    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:20.416481    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:20.456771    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:20.456780    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:20.470619    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:20.470630    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:20.485390    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:20.485402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:20.496928    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:20.496938    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:20.507824    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:20.507838    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:20.529965    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:20.529972    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:20.534750    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:20.534760    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:20.549832    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:20.549843    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:20.560778    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:20.560793    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:20.576895    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:20.576906    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:20.588219    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:20.588230    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:20.623143    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:20.623155    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:20.641792    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:20.641803    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:20.657413    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:20.657423    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:20.669554    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:20.669569    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:20.682051    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:20.682062    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:23.195853    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:28.198160    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:28.198457    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:28.230554    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:28.230695    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:28.245980    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:28.246072    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:28.258634    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:28.258714    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:28.269773    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:28.269853    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:28.280170    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:28.280249    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:28.291094    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:28.291180    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:28.301943    6206 logs.go:282] 0 containers: []
	W1216 12:35:28.301959    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:28.302019    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:28.312298    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:28.312316    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:28.312321    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:28.350461    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:28.350470    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:28.389011    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:28.389023    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:28.404061    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:28.404073    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:28.416073    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:28.416087    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:28.433558    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:28.433569    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:28.446767    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:28.446781    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:28.451581    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:28.451587    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:28.466046    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:28.466058    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:28.478007    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:28.478018    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:28.489456    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:28.489466    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:28.511168    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:28.511177    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:28.522893    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:28.522903    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:28.538593    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:28.538604    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:28.552433    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:28.552444    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:28.568386    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:28.568397    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:28.582186    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:28.582196    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:31.095579    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:36.096117    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:36.096402    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:36.117664    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:36.117768    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:36.129970    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:36.130055    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:36.142674    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:36.142761    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:36.153804    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:36.153885    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:36.163898    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:36.163979    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:36.174457    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:36.174532    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:36.188203    6206 logs.go:282] 0 containers: []
	W1216 12:35:36.188214    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:36.188281    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:36.199549    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:36.199565    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:36.199571    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:36.211938    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:36.211951    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:36.226281    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:36.226292    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:36.244365    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:36.244376    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:36.258645    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:36.258656    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:36.281211    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:36.281218    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:36.319614    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:36.319622    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:36.333676    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:36.333686    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:36.344961    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:36.344970    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:36.349145    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:36.349152    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:36.360014    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:36.360025    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:36.373172    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:36.373183    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:36.384540    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:36.384553    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:36.420610    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:36.420624    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:36.432594    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:36.432606    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:36.453068    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:36.453078    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:36.467642    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:36.467653    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:38.980825    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:43.983267    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:43.983376    6206 kubeadm.go:597] duration metric: took 4m4.910842833s to restartPrimaryControlPlane
	W1216 12:35:43.983475    6206 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 12:35:43.983513    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 12:35:45.050668    6206 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.067133583s)
	I1216 12:35:45.050744    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:35:45.055686    6206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:35:45.058692    6206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:35:45.061420    6206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:35:45.061426    6206 kubeadm.go:157] found existing configuration files:
	
	I1216 12:35:45.061463    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/admin.conf
	I1216 12:35:45.063847    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:35:45.063873    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:35:45.066780    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/kubelet.conf
	I1216 12:35:45.069350    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:35:45.069378    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:35:45.071901    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/controller-manager.conf
	I1216 12:35:45.074951    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:35:45.074972    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:35:45.077675    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/scheduler.conf
	I1216 12:35:45.080148    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:35:45.080182    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:35:45.083219    6206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:35:45.102251    6206 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1216 12:35:45.102284    6206 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:35:45.148704    6206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:35:45.148761    6206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:35:45.148801    6206 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:35:45.199468    6206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:35:45.203478    6206 out.go:235]   - Generating certificates and keys ...
	I1216 12:35:45.203513    6206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:35:45.203544    6206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:35:45.203582    6206 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:35:45.203618    6206 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:35:45.203652    6206 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:35:45.203686    6206 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:35:45.203721    6206 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:35:45.203754    6206 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:35:45.203787    6206 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:35:45.203825    6206 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:35:45.203843    6206 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:35:45.203870    6206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:35:45.364272    6206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:35:45.532349    6206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:35:45.571457    6206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:35:45.818675    6206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:35:45.848878    6206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:35:45.849254    6206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:35:45.849331    6206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:35:45.932475    6206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:35:45.936644    6206 out.go:235]   - Booting up control plane ...
	I1216 12:35:45.936690    6206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:35:45.936730    6206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:35:45.936766    6206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:35:45.936811    6206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:35:45.936889    6206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:35:50.437575    6206 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502609 seconds
	I1216 12:35:50.437639    6206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 12:35:50.441830    6206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 12:35:50.954072    6206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 12:35:50.954274    6206 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-868000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 12:35:51.457906    6206 kubeadm.go:310] [bootstrap-token] Using token: mzbv99.ptmg9051t5oylp1h
	I1216 12:35:51.462988    6206 out.go:235]   - Configuring RBAC rules ...
	I1216 12:35:51.463068    6206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 12:35:51.463114    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 12:35:51.465358    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 12:35:51.470436    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 12:35:51.471236    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 12:35:51.472227    6206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 12:35:51.475380    6206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 12:35:51.656188    6206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 12:35:51.861573    6206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 12:35:51.862119    6206 kubeadm.go:310] 
	I1216 12:35:51.862159    6206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 12:35:51.862166    6206 kubeadm.go:310] 
	I1216 12:35:51.862208    6206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 12:35:51.862215    6206 kubeadm.go:310] 
	I1216 12:35:51.862226    6206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 12:35:51.862254    6206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 12:35:51.862281    6206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 12:35:51.862283    6206 kubeadm.go:310] 
	I1216 12:35:51.862307    6206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 12:35:51.862310    6206 kubeadm.go:310] 
	I1216 12:35:51.862332    6206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 12:35:51.862334    6206 kubeadm.go:310] 
	I1216 12:35:51.862361    6206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 12:35:51.862395    6206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 12:35:51.862432    6206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 12:35:51.862437    6206 kubeadm.go:310] 
	I1216 12:35:51.862476    6206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 12:35:51.862514    6206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 12:35:51.862518    6206 kubeadm.go:310] 
	I1216 12:35:51.862563    6206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mzbv99.ptmg9051t5oylp1h \
	I1216 12:35:51.862617    6206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f \
	I1216 12:35:51.862632    6206 kubeadm.go:310] 	--control-plane 
	I1216 12:35:51.862636    6206 kubeadm.go:310] 
	I1216 12:35:51.862679    6206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 12:35:51.862682    6206 kubeadm.go:310] 
	I1216 12:35:51.862720    6206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mzbv99.ptmg9051t5oylp1h \
	I1216 12:35:51.862769    6206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f 
	I1216 12:35:51.862818    6206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:35:51.862826    6206 cni.go:84] Creating CNI manager for ""
	I1216 12:35:51.862835    6206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:35:51.867062    6206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 12:35:51.875006    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 12:35:51.877947    6206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 12:35:51.885716    6206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 12:35:51.885805    6206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 12:35:51.885896    6206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-868000 minikube.k8s.io/updated_at=2024_12_16T12_35_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=running-upgrade-868000 minikube.k8s.io/primary=true
	I1216 12:35:51.918753    6206 ops.go:34] apiserver oom_adj: -16
	I1216 12:35:51.918751    6206 kubeadm.go:1113] duration metric: took 33.005791ms to wait for elevateKubeSystemPrivileges
	I1216 12:35:51.928522    6206 kubeadm.go:394] duration metric: took 4m12.870507667s to StartCluster
	I1216 12:35:51.928541    6206 settings.go:142] acquiring lock: {Name:mk8b3a21b6dc2a47a05d302a72ae4dd9a4679c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:51.928638    6206 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:35:51.929044    6206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:51.929245    6206 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:35:51.929268    6206 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 12:35:51.929305    6206 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-868000"
	I1216 12:35:51.929325    6206 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-868000"
	W1216 12:35:51.929330    6206 addons.go:243] addon storage-provisioner should already be in state true
	I1216 12:35:51.929344    6206 host.go:66] Checking if "running-upgrade-868000" exists ...
	I1216 12:35:51.929367    6206 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-868000"
	I1216 12:35:51.929403    6206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-868000"
	I1216 12:35:51.929534    6206 config.go:182] Loaded profile config "running-upgrade-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:35:51.930535    6206 kapi.go:59] client config for running-upgrade-868000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104c82f70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:35:51.930817    6206 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-868000"
	W1216 12:35:51.930822    6206 addons.go:243] addon default-storageclass should already be in state true
	I1216 12:35:51.930829    6206 host.go:66] Checking if "running-upgrade-868000" exists ...
	I1216 12:35:51.932968    6206 out.go:177] * Verifying Kubernetes components...
	I1216 12:35:51.933301    6206 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 12:35:51.936098    6206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 12:35:51.936105    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:35:51.939005    6206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:51.943010    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:35:51.947019    6206 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 12:35:51.947026    6206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 12:35:51.947033    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:35:52.028654    6206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:35:52.033878    6206 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:35:52.033933    6206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:52.037666    6206 api_server.go:72] duration metric: took 108.407334ms to wait for apiserver process to appear ...
	I1216 12:35:52.037675    6206 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:35:52.037681    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:52.053129    6206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 12:35:52.069317    6206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 12:35:52.411634    6206 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 12:35:52.411646    6206 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 12:35:57.039865    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:57.039916    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:02.040396    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:02.040445    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:07.040909    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:07.040938    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:12.041510    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:12.041545    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:17.042695    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:17.042738    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:22.043838    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:22.043927    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1216 12:36:22.414325    6206 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1216 12:36:22.418503    6206 out.go:177] * Enabled addons: storage-provisioner
	I1216 12:36:22.426666    6206 addons.go:510] duration metric: took 30.497137917s for enable addons: enabled=[storage-provisioner]
	I1216 12:36:27.045465    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:27.045510    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:32.047318    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:32.047363    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:37.048343    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:37.048396    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:42.050674    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:42.050701    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:47.052976    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:47.053016    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:52.055332    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:52.055504    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:52.078585    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:36:52.078660    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:52.095291    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:36:52.095368    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:52.108278    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:36:52.108350    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:52.120314    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:36:52.120390    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:52.132234    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:36:52.132315    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:52.144114    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:36:52.144203    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:52.156000    6206 logs.go:282] 0 containers: []
	W1216 12:36:52.156011    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:52.156073    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:52.171188    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:36:52.171204    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:52.171211    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:52.175961    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:52.175972    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:52.216619    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:36:52.216631    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:36:52.233058    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:36:52.233073    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:36:52.247983    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:36:52.247996    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:36:52.263768    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:36:52.263782    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:36:52.287306    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:36:52.287318    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:36:52.310258    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:36:52.310271    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:52.321522    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:52.321536    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:52.360518    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:36:52.360529    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:36:52.372124    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:36:52.372136    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:36:52.383343    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:36:52.383354    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:36:52.400059    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:52.400068    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:54.927616    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:59.930257    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:59.930361    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:59.941843    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:36:59.941925    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:59.953113    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:36:59.953195    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:59.968246    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:36:59.968325    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:59.979348    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:36:59.979428    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:59.991116    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:36:59.991200    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:00.003140    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:00.003224    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:00.014225    6206 logs.go:282] 0 containers: []
	W1216 12:37:00.014238    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:00.014310    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:00.025462    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:00.025478    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:00.025484    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:00.037896    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:00.037911    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:00.074256    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:00.074268    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:00.086605    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:00.086621    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:00.098072    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:00.098086    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:00.115468    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:00.115482    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:00.140309    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:00.140327    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:00.159017    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:00.159026    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:00.170868    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:00.170881    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:00.209120    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:00.209130    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:00.213663    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:00.213673    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:00.228642    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:00.228654    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:00.242841    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:00.242852    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:02.756475    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:07.758695    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:07.758795    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:07.770880    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:07.770956    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:07.782228    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:07.782301    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:07.793321    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:07.793400    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:07.806549    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:07.806621    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:07.818257    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:07.818300    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:07.829539    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:07.829618    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:07.840259    6206 logs.go:282] 0 containers: []
	W1216 12:37:07.840272    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:07.840343    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:07.851343    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:07.851359    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:07.851365    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:07.855971    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:07.855981    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:07.871926    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:07.871938    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:07.885323    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:07.885336    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:07.897716    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:07.897730    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:07.913592    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:07.913605    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:07.932388    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:07.932407    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:07.945424    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:07.945437    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:07.984619    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:07.984631    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:08.022197    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:08.022206    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:08.037620    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:08.037633    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:08.049802    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:08.049816    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:08.062644    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:08.062655    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:10.589461    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:15.592166    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:15.592392    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:15.612669    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:15.612783    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:15.627113    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:15.627195    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:15.639901    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:15.639984    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:15.650665    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:15.650749    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:15.662959    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:15.663048    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:15.673427    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:15.673505    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:15.684951    6206 logs.go:282] 0 containers: []
	W1216 12:37:15.684967    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:15.685041    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:15.696640    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:15.696659    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:15.696665    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:15.709242    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:15.709256    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:15.722000    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:15.722014    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:15.737822    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:15.737835    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:15.758502    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:15.758514    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:15.771088    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:15.771098    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:15.812215    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:15.812224    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:15.827234    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:15.827247    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:15.842455    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:15.842467    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:15.860936    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:15.860947    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:15.888135    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:15.888149    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:15.901061    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:15.901077    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:15.941744    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:15.941757    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:18.449267    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:23.451626    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:23.451809    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:23.464364    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:23.464448    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:23.474803    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:23.474884    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:23.485180    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:23.485259    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:23.496454    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:23.496534    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:23.507406    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:23.507491    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:23.517732    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:23.517813    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:23.528213    6206 logs.go:282] 0 containers: []
	W1216 12:37:23.528227    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:23.528308    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:23.539931    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:23.539947    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:23.539952    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:23.579416    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:23.579452    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:23.616199    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:23.616209    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:23.635499    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:23.635512    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:23.647871    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:23.647884    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:23.660643    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:23.660657    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:23.685019    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:23.685032    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:23.697841    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:23.697855    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:23.702730    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:23.702741    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:23.717749    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:23.717760    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:23.732031    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:23.732045    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:23.751135    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:23.751148    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:23.763517    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:23.763527    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:26.292370    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:31.294661    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:31.294913    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:31.320986    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:31.321096    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:31.335183    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:31.335271    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:31.347413    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:31.347488    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:31.360199    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:31.360277    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:31.372110    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:31.372189    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:31.382667    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:31.382749    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:31.393260    6206 logs.go:282] 0 containers: []
	W1216 12:37:31.393274    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:31.393340    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:31.403693    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:31.403717    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:31.403722    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:31.414930    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:31.414941    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:31.439405    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:31.439417    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:31.477168    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:31.477178    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:31.519092    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:31.519103    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:31.531693    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:31.531708    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:31.550565    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:31.550580    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:31.566356    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:31.566372    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:31.579538    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:31.579551    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:31.591879    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:31.591892    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:31.597292    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:31.597304    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:31.612685    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:31.612697    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:31.627012    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:31.627024    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:34.140927    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:39.143467    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:39.143963    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:39.178975    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:39.179141    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:39.197496    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:39.197599    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:39.213942    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:39.214028    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:39.226368    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:39.226460    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:39.241040    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:39.241125    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:39.252413    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:39.252497    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:39.263660    6206 logs.go:282] 0 containers: []
	W1216 12:37:39.263673    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:39.263750    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:39.273982    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:39.273997    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:39.274005    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:39.311704    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:39.311720    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:39.324156    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:39.324172    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:39.341272    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:39.341283    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:39.354129    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:39.354140    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:39.366788    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:39.366802    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:39.371194    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:39.371206    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:39.407113    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:39.407126    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:39.422394    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:39.422406    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:39.437237    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:39.437249    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:39.452535    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:39.452545    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:39.481034    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:39.481046    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:39.507404    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:39.507419    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:42.021556    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:47.023938    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:47.024159    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:47.041645    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:47.041740    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:47.056643    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:47.056724    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:47.067830    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:47.067909    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:47.078908    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:47.078984    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:47.089000    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:47.089079    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:47.100532    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:47.100609    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:47.110680    6206 logs.go:282] 0 containers: []
	W1216 12:37:47.110694    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:47.110757    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:47.122059    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:47.122075    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:47.122080    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:47.138166    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:47.138181    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:47.159424    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:47.159437    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:47.197714    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:47.197724    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:47.202314    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:47.202321    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:47.239825    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:47.239837    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:47.258819    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:47.258830    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:47.275244    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:47.275254    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:47.300468    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:47.300482    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:47.315642    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:47.315655    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:47.331006    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:47.331022    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:47.344259    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:47.344269    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:47.356175    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:47.356187    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:49.877203    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:54.879564    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:54.879738    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:54.895736    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:54.895825    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:54.908638    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:54.908718    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:54.919883    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:37:54.919966    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:54.931449    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:54.931523    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:54.942481    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:54.942551    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:54.953406    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:54.953486    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:54.964492    6206 logs.go:282] 0 containers: []
	W1216 12:37:54.964508    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:54.964580    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:54.976148    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:54.976166    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:54.976172    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:55.013131    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:37:55.013144    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:37:55.025364    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:55.025381    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:55.044392    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:55.044403    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:55.060293    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:55.060305    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:55.072114    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:55.072125    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:55.108866    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:55.108874    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:55.123778    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:37:55.123790    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:37:55.135059    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:55.135070    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:55.147045    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:55.147059    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:55.164640    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:55.164653    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:55.177266    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:55.177276    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:55.182131    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:55.182138    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:55.195162    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:55.195177    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:55.215739    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:55.215750    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
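
One change worth flagging in the pass above: as of the 12:37:54 listing, the coredns filter matches four containers instead of two (6408be651234 and 857d26c080c8 alongside the earlier 913aa0aa8c39 and bf6b78109554), consistent with the coredns pods having been recreated while the apiserver stayed unreachable. The verbatim listing command, wrapped in an assumed watch loop to observe the set changing:

    # The listing command is verbatim from the log; the loop and the
    # 5s pause are assumptions for watching the container set over time.
    while true; do
      docker ps -a --filter=name=k8s_coredns --format={{.ID}}
      sleep 5
    done
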
	I1216 12:37:57.743313    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:02.743875    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:02.744132    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:02.764924    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:02.765035    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:02.780484    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:02.780564    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:02.793129    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:02.793219    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:02.805197    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:02.805270    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:02.816722    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:02.816792    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:02.827914    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:02.827986    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:02.839216    6206 logs.go:282] 0 containers: []
	W1216 12:38:02.839226    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:02.839285    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:02.850376    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:02.850396    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:02.850402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:02.863282    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:02.863293    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:02.867625    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:02.867635    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:02.881612    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:02.881623    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:02.894055    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:02.894071    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:02.912564    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:02.912579    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:02.930221    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:02.930232    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:02.942731    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:02.942741    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:02.955771    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:02.955783    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:02.967205    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:02.967216    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:02.979064    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:02.979074    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:02.990881    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:02.990891    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:03.015801    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:03.015809    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:03.051843    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:03.051853    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:03.090208    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:03.090223    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:05.608643    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:10.611035    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:10.611444    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:10.644775    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:10.644903    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:10.665140    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:10.665242    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:10.680636    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:10.680727    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:10.693512    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:10.693594    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:10.704679    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:10.704760    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:10.716237    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:10.716312    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:10.727315    6206 logs.go:282] 0 containers: []
	W1216 12:38:10.727329    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:10.727398    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:10.743252    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:10.743269    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:10.743276    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:10.748207    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:10.748215    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:10.790361    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:10.790391    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:10.803609    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:10.803622    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:10.818904    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:10.818917    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:10.830998    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:10.831010    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:10.854518    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:10.854525    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:10.890677    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:10.890687    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:10.905827    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:10.905839    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:10.920336    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:10.920346    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:10.932514    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:10.932528    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:10.944775    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:10.944785    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:10.963642    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:10.963655    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:10.976224    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:10.976235    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:10.992259    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:10.992271    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:13.506914    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:18.509338    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:18.509593    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:18.530974    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:18.531085    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:18.546996    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:18.547084    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:18.559694    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:18.559775    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:18.571108    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:18.571189    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:18.582505    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:18.582591    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:18.594159    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:18.594237    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:18.605870    6206 logs.go:282] 0 containers: []
	W1216 12:38:18.605881    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:18.605942    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:18.618538    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:18.618555    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:18.618563    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:18.657468    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:18.657478    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:18.702364    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:18.702376    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:18.714284    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:18.714296    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:18.729856    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:18.729869    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:18.742634    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:18.742650    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:18.747113    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:18.747120    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:18.759369    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:18.759379    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:18.783851    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:18.783864    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:18.798515    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:18.798529    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:18.810737    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:18.810749    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:18.822652    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:18.822663    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:18.841310    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:18.841322    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:18.855530    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:18.855540    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:18.872392    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:18.872402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:21.387879    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:26.390261    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:26.390541    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:26.414057    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:26.414185    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:26.430693    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:26.430773    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:26.444092    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:26.444181    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:26.458993    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:26.459065    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:26.469575    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:26.469654    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:26.483195    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:26.483270    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:26.493670    6206 logs.go:282] 0 containers: []
	W1216 12:38:26.493684    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:26.493752    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:26.508468    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:26.508488    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:26.508493    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:26.545732    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:26.545742    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:26.558654    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:26.558672    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:26.572699    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:26.572716    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:26.584811    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:26.584825    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:26.596537    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:26.596551    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:26.601465    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:26.601471    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:26.637306    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:26.637318    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:26.651315    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:26.651326    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:26.664082    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:26.664094    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:26.693610    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:26.693628    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:26.708531    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:26.708543    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:26.721497    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:26.721510    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:26.734315    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:26.734325    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:26.747069    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:26.747085    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
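
The recurring "container status" command deserves a note, since it encodes a runtime fallback: prefer crictl when installed, otherwise fall back to the Docker CLI. The invocation below is verbatim from the Run: lines above; only the comments are added.

    # If crictl is on PATH, `which` prints its path and sudo runs it.
    # If not, the substitution yields the literal string "crictl", that
    # invocation fails, and || falls through to plain docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
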
	I1216 12:38:29.273719    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:34.276120    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:34.276268    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:34.289799    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:34.289888    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:34.309699    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:34.309783    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:34.321897    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:34.321973    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:34.332485    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:34.332550    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:34.343160    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:34.343224    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:34.353480    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:34.353563    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:34.363430    6206 logs.go:282] 0 containers: []
	W1216 12:38:34.363441    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:34.363511    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:34.373558    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:34.373578    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:34.373584    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:34.385889    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:34.385900    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:34.399732    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:34.399744    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:34.413920    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:34.413932    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:34.425273    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:34.425286    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:34.443107    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:34.443117    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:34.457567    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:34.457579    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:34.475735    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:34.475747    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:34.497629    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:34.497641    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:34.521653    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:34.521663    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:34.557904    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:34.557912    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:34.562241    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:34.562250    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:34.573993    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:34.574004    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:34.607983    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:34.607994    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:34.619722    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:34.619732    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:37.133723    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:42.136439    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:42.136640    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:42.154261    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:42.154361    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:42.167479    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:42.167560    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:42.178670    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:42.178751    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:42.189091    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:42.189160    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:42.199646    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:42.199716    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:42.209678    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:42.209748    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:42.220086    6206 logs.go:282] 0 containers: []
	W1216 12:38:42.220097    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:42.220162    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:42.230542    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:42.230561    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:42.230568    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:42.245678    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:42.245688    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:42.284745    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:42.284755    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:42.298005    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:42.298016    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:42.319366    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:42.319379    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:42.330980    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:42.330993    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:42.342834    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:42.342848    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:42.367645    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:42.367656    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:42.372409    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:42.372417    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:42.411135    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:42.411149    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:42.422558    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:42.422569    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:42.437165    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:42.437176    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:42.455688    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:42.455699    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:42.468131    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:42.468143    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:42.482955    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:42.482965    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:44.999792    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:50.002485    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:50.002693    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:50.021945    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:50.022050    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:50.053077    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:50.053155    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:50.071298    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:50.071384    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:50.083544    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:50.083610    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:50.093764    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:50.093831    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:50.104556    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:50.104633    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:50.115134    6206 logs.go:282] 0 containers: []
	W1216 12:38:50.115146    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:50.115217    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:50.125824    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:50.125842    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:50.125847    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:50.140687    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:50.140696    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:50.155946    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:50.155958    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:50.167396    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:50.167408    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:50.202204    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:50.202219    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:50.214016    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:50.214032    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:50.225203    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:50.225214    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:50.262591    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:50.262598    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:50.267012    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:50.267018    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:50.280826    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:50.280837    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:50.292335    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:50.292350    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:50.316073    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:50.316088    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:50.330193    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:50.330207    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:50.342484    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:50.342495    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:50.354017    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:50.354030    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:52.876373    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:57.878783    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:57.879027    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:57.897763    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:57.897862    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:57.912100    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:57.912187    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:57.924236    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:57.924317    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:57.935006    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:57.935077    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:57.945651    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:57.945736    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:57.960027    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:57.960098    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:57.969960    6206 logs.go:282] 0 containers: []
	W1216 12:38:57.969971    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:57.970035    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:57.980312    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:57.980330    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:57.980337    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:57.985342    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:57.985348    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:57.996544    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:57.996560    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:58.033461    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:58.033472    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:58.046691    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:58.046703    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:58.085285    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:58.085301    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:58.097788    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:58.097800    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:58.111817    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:58.111832    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:58.123441    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:58.123455    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:58.137845    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:58.137858    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:58.152957    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:58.152967    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:58.164358    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:58.164371    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:58.175909    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:58.175920    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:58.190351    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:58.190362    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:58.214933    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:58.214947    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:00.743001    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:05.745496    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:05.745738    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:05.766056    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:05.766167    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:05.784486    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:05.784571    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:05.795930    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:05.796031    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:05.806329    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:05.806403    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:05.816329    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:05.816398    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:05.831013    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:05.831088    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:05.844349    6206 logs.go:282] 0 containers: []
	W1216 12:39:05.844363    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:05.844429    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:05.854898    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:05.854915    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:05.854921    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:05.869666    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:05.869676    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:05.887326    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:05.887339    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:05.899663    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:05.899678    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:05.914082    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:05.914092    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:05.925678    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:05.925690    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:05.950142    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:05.950155    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:05.954415    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:05.954421    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:05.965432    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:05.965444    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:05.979744    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:05.979754    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:05.991912    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:05.991923    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:06.008013    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:06.008024    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:06.020055    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:06.020065    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:06.031683    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:06.031694    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:06.070388    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:06.070397    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:08.607033    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:13.609270    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:13.609400    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:13.620756    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:13.620840    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:13.632381    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:13.632468    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:13.643582    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:13.643663    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:13.654216    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:13.654292    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:13.664325    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:13.664405    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:13.678885    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:13.678969    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:13.695180    6206 logs.go:282] 0 containers: []
	W1216 12:39:13.695192    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:13.695265    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:13.707113    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:13.707135    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:13.707142    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:13.729067    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:13.729081    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:13.746016    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:13.746030    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:13.758940    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:13.758954    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:13.778043    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:13.778064    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:13.819776    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:13.819798    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:13.831905    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:13.831919    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:13.846975    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:13.846994    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:13.860363    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:13.860377    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:13.871898    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:13.871912    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:13.883831    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:13.883844    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:13.909547    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:13.909566    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:13.921845    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:13.921855    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:13.926656    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:13.926669    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:13.969465    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:13.969478    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:16.486788    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:21.489023    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:21.489131    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:21.500854    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:21.500940    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:21.511836    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:21.511924    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:21.522402    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:21.522483    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:21.532813    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:21.532889    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:21.543267    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:21.543350    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:21.553709    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:21.553786    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:21.563897    6206 logs.go:282] 0 containers: []
	W1216 12:39:21.563907    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:21.563970    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:21.574137    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:21.574154    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:21.574160    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:21.578740    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:21.578747    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:21.595131    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:21.595146    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:21.607067    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:21.607078    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:21.645623    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:21.645633    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:21.657241    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:21.657252    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:21.669040    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:21.669050    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:21.680229    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:21.680240    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:21.706046    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:21.706053    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:21.746032    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:21.746044    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:21.760261    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:21.760274    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:21.777609    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:21.777622    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:21.792260    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:21.792273    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:21.811051    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:21.811064    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:21.823158    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:21.823172    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:24.336662    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:29.338938    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:29.339056    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:29.350756    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:29.350832    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:29.362273    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:29.362361    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:29.373954    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:29.374078    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:29.390550    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:29.390622    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:29.400827    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:29.400911    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:29.413535    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:29.413616    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:29.423654    6206 logs.go:282] 0 containers: []
	W1216 12:39:29.423665    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:29.423732    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:29.433918    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:29.433936    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:29.433942    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:29.448265    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:29.448275    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:29.460244    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:29.460257    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:29.471558    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:29.471569    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:29.496592    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:29.496600    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:29.533929    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:29.533942    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:29.547913    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:29.547923    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:29.559996    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:29.560008    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:29.565062    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:29.565069    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:29.576584    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:29.576595    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:29.615934    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:29.615945    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:29.627983    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:29.627996    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:29.639592    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:29.639603    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:29.658403    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:29.658414    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:29.675550    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:29.675560    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:32.193601    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:37.195316    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:37.195429    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:37.207014    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:37.207095    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:37.217327    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:37.217398    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:37.232158    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:37.232239    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:37.243030    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:37.243105    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:37.253548    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:37.253621    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:37.266824    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:37.266904    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:37.277684    6206 logs.go:282] 0 containers: []
	W1216 12:39:37.277699    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:37.277766    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:37.288359    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:37.288377    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:37.288382    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:37.300067    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:37.300082    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:37.325136    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:37.325149    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:37.336985    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:37.336997    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:37.349398    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:37.349411    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:37.361356    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:37.361368    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:37.373481    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:37.373494    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:37.392473    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:37.392484    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:37.407751    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:37.407762    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:37.421891    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:37.421907    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:37.437332    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:37.437349    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:37.449286    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:37.449300    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:37.461629    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:37.461641    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:37.498194    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:37.498204    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:37.502579    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:37.502587    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:40.040808    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:45.041737    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:45.041949    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:45.060686    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:45.060805    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:45.074527    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:45.074605    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:45.086603    6206 logs.go:282] 4 containers: [67be7aaf65be 63cd7ff1772f 6408be651234 857d26c080c8]
	I1216 12:39:45.086685    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:45.097846    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:45.097926    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:45.108500    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:45.108569    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:45.125052    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:45.125133    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:45.134963    6206 logs.go:282] 0 containers: []
	W1216 12:39:45.134975    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:45.135041    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:45.145616    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:45.145633    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:45.145638    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:45.157547    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:45.157563    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:45.182165    6206 logs.go:123] Gathering logs for coredns [67be7aaf65be] ...
	I1216 12:39:45.182181    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67be7aaf65be"
	I1216 12:39:45.193890    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:45.193901    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:45.205914    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:45.205926    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:45.217741    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:45.217754    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:45.232654    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:45.232667    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:45.267710    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:45.267721    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:45.282311    6206 logs.go:123] Gathering logs for coredns [63cd7ff1772f] ...
	I1216 12:39:45.282320    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63cd7ff1772f"
	I1216 12:39:45.293809    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:45.293822    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:45.305451    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:45.305461    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:45.310708    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:45.310717    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:45.325475    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:45.325486    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:45.343283    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:45.343294    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:45.355384    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:45.355399    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:47.896773    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:52.899236    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:52.906402    6206 out.go:201] 
	W1216 12:39:52.910301    6206 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1216 12:39:52.910319    6206 out.go:270] * 
	W1216 12:39:52.911692    6206 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:39:52.921271    6206 out.go:201] 

** /stderr **
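
Reading the stderr above: the upgraded control plane never passed its health check. minikube probed https://10.0.2.15:8443/healthz roughly every eight seconds, each probe timing out after the 5s client deadline ("context deadline exceeded"), and between probes it re-ran the same collection cycle (enumerate the k8s_* containers, tail 400 lines from each, dump the kubelet and docker journals, dmesg, and kubectl describe nodes). Once the 6m0s node wait expired it aborted with GUEST_START. A single probe can be reproduced by hand while the guest is still up, along the lines of the sketch below (illustrative only; it assumes the running-upgrade-868000 profile is still booted and that curl is available in the guest image):

	$ out/minikube-darwin-arm64 ssh -p running-upgrade-868000 -- \
	    curl -sk -o /dev/null -w '%{http_code}\n' --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers 200; here every probe hit the timeout, consistent with the kube-apiserver container (2d72cd87e3d8) being present in docker ps -a but its endpoint never serving.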
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-868000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-16 12:39:53.033161 -0800 PST m=+3922.908239709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-868000 -n running-upgrade-868000
E1216 12:40:01.734576    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-868000 -n running-upgrade-868000: exit status 2 (15.759853875s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
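On the "may be ok" note above: if I recall the status command's semantics correctly, minikube encodes component health in its exit code (distinct non-zero codes for host, cluster, and Kubernetes state) rather than only in the formatted output, so a non-zero exit with Host=Running is expected when the VM is up but the cluster inside it is not, exactly the situation here. A per-component view next to the exit code can be had with something like the following (illustrative; the template keys are the status struct's fields):

	$ out/minikube-darwin-arm64 status -p running-upgrade-868000 \
	    --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'; echo "exit=$?"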
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-868000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-039000          | force-systemd-flag-039000 | jenkins | v1.34.0 | 16 Dec 24 12:29 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-227000              | force-systemd-env-227000  | jenkins | v1.34.0 | 16 Dec 24 12:29 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-227000           | force-systemd-env-227000  | jenkins | v1.34.0 | 16 Dec 24 12:29 PST | 16 Dec 24 12:29 PST |
	| start   | -p docker-flags-213000                | docker-flags-213000       | jenkins | v1.34.0 | 16 Dec 24 12:29 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-039000             | force-systemd-flag-039000 | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-039000          | force-systemd-flag-039000 | jenkins | v1.34.0 | 16 Dec 24 12:30 PST | 16 Dec 24 12:30 PST |
	| start   | -p cert-expiration-027000             | cert-expiration-027000    | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-213000 ssh               | docker-flags-213000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-213000 ssh               | docker-flags-213000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-213000                | docker-flags-213000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST | 16 Dec 24 12:30 PST |
	| start   | -p cert-options-970000                | cert-options-970000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-970000 ssh               | cert-options-970000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-970000 -- sudo        | cert-options-970000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-970000                | cert-options-970000       | jenkins | v1.34.0 | 16 Dec 24 12:30 PST | 16 Dec 24 12:30 PST |
	| start   | -p running-upgrade-868000             | minikube                  | jenkins | v1.26.0 | 16 Dec 24 12:30 PST | 16 Dec 24 12:31 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-868000             | running-upgrade-868000    | jenkins | v1.34.0 | 16 Dec 24 12:31 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-027000             | cert-expiration-027000    | jenkins | v1.34.0 | 16 Dec 24 12:33 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-027000             | cert-expiration-027000    | jenkins | v1.34.0 | 16 Dec 24 12:33 PST | 16 Dec 24 12:33 PST |
	| start   | -p kubernetes-upgrade-781000          | kubernetes-upgrade-781000 | jenkins | v1.34.0 | 16 Dec 24 12:33 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-781000          | kubernetes-upgrade-781000 | jenkins | v1.34.0 | 16 Dec 24 12:33 PST | 16 Dec 24 12:33 PST |
	| start   | -p kubernetes-upgrade-781000          | kubernetes-upgrade-781000 | jenkins | v1.34.0 | 16 Dec 24 12:33 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-781000          | kubernetes-upgrade-781000 | jenkins | v1.34.0 | 16 Dec 24 12:33 PST | 16 Dec 24 12:33 PST |
	| start   | -p stopped-upgrade-349000             | minikube                  | jenkins | v1.26.0 | 16 Dec 24 12:33 PST | 16 Dec 24 12:34 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-349000 stop           | minikube                  | jenkins | v1.26.0 | 16 Dec 24 12:34 PST | 16 Dec 24 12:34 PST |
	| start   | -p stopped-upgrade-349000             | stopped-upgrade-349000    | jenkins | v1.34.0 | 16 Dec 24 12:34 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
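The closing rows of the audit table trace the TestStoppedBinaryUpgrade flow: the stopped-upgrade-349000 profile is created and stopped with the old minikube v1.26.0 binary, then restarted with the v1.34.0 binary under test. A minimal sketch of that sequence, assuming the two release binaries are available locally as ./minikube-v1.26.0 and ./minikube-v1.34.0 (hypothetical names; the test harness fetches and invokes them itself):

	# create and stop the profile with the legacy binary
	./minikube-v1.26.0 start -p stopped-upgrade-349000 --memory=2200 --vm-driver=qemu2
	./minikube-v1.26.0 stop -p stopped-upgrade-349000
	# restart the same profile with the binary under test
	./minikube-v1.34.0 start -p stopped-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2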
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 12:34:33
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 12:34:33.653743    6375 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:34:33.653922    6375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:34:33.653926    6375 out.go:358] Setting ErrFile to fd 2...
	I1216 12:34:33.653928    6375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:34:33.654095    6375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:34:33.655359    6375 out.go:352] Setting JSON to false
	I1216 12:34:33.675477    6375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3844,"bootTime":1734377429,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:34:33.675581    6375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:34:33.680559    6375 out.go:177] * [stopped-upgrade-349000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:34:33.688517    6375 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:34:33.688546    6375 notify.go:220] Checking for updates...
	I1216 12:34:33.696472    6375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:34:33.699490    6375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:34:33.703510    6375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:34:33.706557    6375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:34:33.709522    6375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:34:33.712781    6375 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:34:33.716547    6375 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 12:34:33.719472    6375 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:34:33.722488    6375 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:34:33.728441    6375 start.go:297] selected driver: qemu2
	I1216 12:34:33.728518    6375 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:34:33.728579    6375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:34:33.731382    6375 cni.go:84] Creating CNI manager for ""
	I1216 12:34:33.731416    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:34:33.731443    6375 start.go:340] cluster config:
	{Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:34:33.731495    6375 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:34:33.739540    6375 out.go:177] * Starting "stopped-upgrade-349000" primary control-plane node in "stopped-upgrade-349000" cluster
	I1216 12:34:33.743510    6375 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 12:34:33.743525    6375 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1216 12:34:33.743537    6375 cache.go:56] Caching tarball of preloaded images
	I1216 12:34:33.743618    6375 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:34:33.743624    6375 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1216 12:34:33.743688    6375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/config.json ...
	I1216 12:34:33.744140    6375 start.go:360] acquireMachinesLock for stopped-upgrade-349000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:34:33.744184    6375 start.go:364] duration metric: took 38.75µs to acquireMachinesLock for "stopped-upgrade-349000"
	I1216 12:34:33.744192    6375 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:34:33.744197    6375 fix.go:54] fixHost starting: 
	I1216 12:34:33.744298    6375 fix.go:112] recreateIfNeeded on stopped-upgrade-349000: state=Stopped err=<nil>
	W1216 12:34:33.744306    6375 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:34:33.748322    6375 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-349000" ...
	I1216 12:34:32.925755    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:32.925864    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:32.938750    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:32.938837    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:32.956296    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:32.956422    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:32.966788    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:32.966880    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:32.977549    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:32.977629    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:32.987794    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:32.987871    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:32.998903    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:32.998979    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:33.008996    6206 logs.go:282] 0 containers: []
	W1216 12:34:33.009010    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:33.009084    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:33.019674    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:33.019696    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:33.019703    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:33.031219    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:33.031230    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
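
The backtick expression in the command above is a fallback chain: `which crictl` resolves crictl's full path for sudo when the tool is installed; otherwise the literal word crictl is substituted, that sudo invocation fails, and the `|| sudo docker ps -a` branch lists the containers instead. Resolving the path up front also sidesteps sudo's restricted PATH.
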
	I1216 12:34:33.046884    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:33.046898    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:33.084428    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:33.084440    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:33.107427    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:33.107437    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:33.122878    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:33.122889    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:33.140374    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:33.140384    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:33.164098    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:33.164107    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:33.176225    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:33.176236    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:33.190261    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:33.190274    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:33.201756    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:33.201769    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:33.212921    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:33.212931    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:33.253252    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:33.253261    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:33.265555    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:33.265567    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:33.283265    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:33.283281    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:33.287780    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:33.287788    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:33.301297    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:33.301310    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:33.756511    6375 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:34:33.756587    6375 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50988-:22,hostfwd=tcp::50989-:2376,hostname=stopped-upgrade-349000 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/disk.qcow2
	I1216 12:34:33.803880    6375 main.go:141] libmachine: STDOUT: 
	I1216 12:34:33.803911    6375 main.go:141] libmachine: STDERR: 
	I1216 12:34:33.803918    6375 main.go:141] libmachine: Waiting for VM to start (ssh -p 50988 docker@127.0.0.1)...
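
Reflowed for readability, the qemu-system-aarch64 invocation logged above has this shape (identical arguments, with the machine directory factored into a variable):

	M=/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000
	qemu-system-aarch64 \
	  -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/Cellar/qemu/9.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d -cdrom "$M/boot2docker.iso" \
	  -qmp "unix:$M/monitor,server,nowait" \
	  -pidfile "$M/qemu.pid" \
	  -nic user,model=virtio,hostfwd=tcp::50988-:22,hostfwd=tcp::50989-:2376,hostname=stopped-upgrade-349000 \
	  -daemonize "$M/disk.qcow2"

The two hostfwd rules are what the subsequent steps rely on: host port 50988 forwards to the guest's SSH port 22 (hence the "ssh -p 50988 docker@127.0.0.1" wait above), and 50989 forwards to 2376, the TLS Docker API port.
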
	I1216 12:34:35.814639    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:40.817462    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:40.817611    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:40.835880    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:40.835963    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:40.846550    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:40.846626    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:40.857647    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:40.857726    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:40.868268    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:40.868349    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:40.879576    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:40.879644    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:40.890911    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:40.890985    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:40.901046    6206 logs.go:282] 0 containers: []
	W1216 12:34:40.901057    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:40.901126    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:40.911471    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:40.911491    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:40.911496    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:40.923544    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:40.923556    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:40.935336    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:40.935347    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:40.950031    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:40.950040    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:40.975527    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:40.975534    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:41.019179    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:41.019187    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:41.033665    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:41.033676    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:41.057630    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:41.057643    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:41.069180    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:41.069193    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:41.081434    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:41.081445    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:41.098174    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:41.098186    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:41.114008    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:41.114020    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:41.125634    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:41.125646    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:41.141271    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:41.141281    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:41.145833    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:41.145842    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:41.182521    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:41.182532    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:41.197592    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:41.197603    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:43.716555    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:48.718831    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:48.718956    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:48.733168    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:48.733256    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:48.744389    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:48.744479    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:48.760064    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:48.760147    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:48.772871    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:48.772963    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:48.784539    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:48.784623    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:48.795997    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:48.796083    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:48.808124    6206 logs.go:282] 0 containers: []
	W1216 12:34:48.808135    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:48.808202    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:48.821029    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:48.821048    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:48.821057    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:48.825741    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:48.825753    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:48.840922    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:48.840934    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:53.079455    6375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/config.json ...
	I1216 12:34:53.079993    6375 machine.go:93] provisionDockerMachine start ...
	I1216 12:34:53.080137    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.080455    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.080468    6375 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 12:34:53.160390    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 12:34:53.160403    6375 buildroot.go:166] provisioning hostname "stopped-upgrade-349000"
	I1216 12:34:53.160472    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.160586    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.160597    6375 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-349000 && echo "stopped-upgrade-349000" | sudo tee /etc/hostname
	I1216 12:34:53.234162    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-349000
	
	I1216 12:34:53.234229    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.234341    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.234349    6375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-349000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-349000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-349000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 12:34:53.304081    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:34:53.304095    6375 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20091-990/.minikube CaCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20091-990/.minikube}
	I1216 12:34:53.304103    6375 buildroot.go:174] setting up certificates
	I1216 12:34:53.304108    6375 provision.go:84] configureAuth start
	I1216 12:34:53.304115    6375 provision.go:143] copyHostCerts
	I1216 12:34:53.304201    6375 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:34:53.304208    6375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:34:53.304329    6375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:34:53.304538    6375 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:34:53.304542    6375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:34:53.304604    6375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:34:53.304719    6375 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:34:53.304722    6375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:34:53.304786    6375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:34:53.304886    6375 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-349000 san=[127.0.0.1 localhost minikube stopped-upgrade-349000]
	I1216 12:34:53.361142    6375 provision.go:177] copyRemoteCerts
	I1216 12:34:53.361192    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:34:53.361200    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:34:53.398249    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 12:34:53.404893    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 12:34:53.412298    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 12:34:53.419668    6375 provision.go:87] duration metric: took 115.548459ms to configureAuth
	I1216 12:34:53.419677    6375 buildroot.go:189] setting minikube options for container-runtime
	I1216 12:34:53.419788    6375 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:34:53.419836    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.419933    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.419938    6375 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 12:34:53.488108    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1216 12:34:53.488117    6375 buildroot.go:70] root file system type: tmpfs
	I1216 12:34:53.488170    6375 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 12:34:53.488236    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.488344    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.488378    6375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 12:34:53.558416    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 12:34:53.558474    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.558576    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.558584    6375 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
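
The command above is a compact idempotent-update idiom: diff -u exits non-zero when the two files differ (or, as the output further below shows, when the installed unit does not exist yet), so the move/daemon-reload/enable/restart branch fires only when the rendered unit actually changed. The same pattern in general form, with foo.conf and the foo service as placeholders:

	# install a new config and restart the service only when the content changed
	if ! sudo diff -u /etc/foo.conf /etc/foo.conf.new >/dev/null 2>&1; then
	  sudo mv /etc/foo.conf.new /etc/foo.conf
	  sudo systemctl daemon-reload && sudo systemctl restart foo
	fi
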
	I1216 12:34:48.855171    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:48.855183    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:48.873642    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:48.873656    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:48.887007    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:48.887023    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:48.899421    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:48.899433    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:48.922551    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:48.922559    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:48.937386    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:48.937401    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:48.962129    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:48.962140    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:48.972972    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:48.972984    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:48.984879    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:48.984889    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:49.022194    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:49.022204    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:49.038595    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:49.038608    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:49.076087    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:49.076103    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:49.088450    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:49.088466    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:49.104106    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:49.104115    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:51.618112    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:34:53.934476    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1216 12:34:53.934497    6375 machine.go:96] duration metric: took 854.486875ms to provisionDockerMachine
	I1216 12:34:53.934505    6375 start.go:293] postStartSetup for "stopped-upgrade-349000" (driver="qemu2")
	I1216 12:34:53.934512    6375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 12:34:53.934592    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 12:34:53.934603    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:34:53.971927    6375 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 12:34:53.973113    6375 info.go:137] Remote host: Buildroot 2021.02.12
	I1216 12:34:53.973122    6375 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20091-990/.minikube/addons for local assets ...
	I1216 12:34:53.973209    6375 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20091-990/.minikube/files for local assets ...
	I1216 12:34:53.973358    6375 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem -> 14942.pem in /etc/ssl/certs
	I1216 12:34:53.973518    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 12:34:53.976414    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem --> /etc/ssl/certs/14942.pem (1708 bytes)
	I1216 12:34:53.983574    6375 start.go:296] duration metric: took 49.063333ms for postStartSetup
	I1216 12:34:53.983587    6375 fix.go:56] duration metric: took 20.239221709s for fixHost
	I1216 12:34:53.983629    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.983742    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.983746    6375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 12:34:54.050221    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734381294.540987129
	
	I1216 12:34:54.050231    6375 fix.go:216] guest clock: 1734381294.540987129
	I1216 12:34:54.050235    6375 fix.go:229] Guest: 2024-12-16 12:34:54.540987129 -0800 PST Remote: 2024-12-16 12:34:53.983589 -0800 PST m=+20.360056626 (delta=557.398129ms)
	I1216 12:34:54.050246    6375 fix.go:200] guest clock delta is within tolerance: 557.398129ms
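
(The delta is simply guest time minus host time at the moment of the probe: 1734381294.540987129 − 1734381293.983589 ≈ 0.557 s, the 557.398129ms reported above, which falls inside the drift tolerance, so the guest clock is left alone.)
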
	I1216 12:34:54.050250    6375 start.go:83] releasing machines lock for "stopped-upgrade-349000", held for 20.305890833s
	I1216 12:34:54.050326    6375 ssh_runner.go:195] Run: cat /version.json
	I1216 12:34:54.050329    6375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 12:34:54.050335    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:34:54.050347    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	W1216 12:34:54.050838    6375 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51136->127.0.0.1:50988: read: connection reset by peer
	I1216 12:34:54.050855    6375 retry.go:31] will retry after 331.051942ms: ssh: handshake failed: read tcp 127.0.0.1:51136->127.0.0.1:50988: read: connection reset by peer
	W1216 12:34:54.431048    6375 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1216 12:34:54.431233    6375 ssh_runner.go:195] Run: systemctl --version
	I1216 12:34:54.434906    6375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 12:34:54.438085    6375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 12:34:54.438159    6375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 12:34:54.443217    6375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 12:34:54.450890    6375 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 12:34:54.450905    6375 start.go:495] detecting cgroup driver to use...
	I1216 12:34:54.451026    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 12:34:54.460143    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1216 12:34:54.464391    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 12:34:54.468268    6375 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 12:34:54.468302    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 12:34:54.471848    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 12:34:54.475173    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 12:34:54.478133    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 12:34:54.481051    6375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 12:34:54.484261    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 12:34:54.487130    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 12:34:54.490119    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 12:34:54.492916    6375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 12:34:54.496014    6375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 12:34:54.498821    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:54.586833    6375 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 12:34:54.593245    6375 start.go:495] detecting cgroup driver to use...
	I1216 12:34:54.593352    6375 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 12:34:54.598977    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 12:34:54.608415    6375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 12:34:54.619556    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 12:34:54.624147    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 12:34:54.628867    6375 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 12:34:54.680228    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 12:34:54.686043    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 12:34:54.691854    6375 ssh_runner.go:195] Run: which cri-dockerd
	I1216 12:34:54.693050    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 12:34:54.695940    6375 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1216 12:34:54.700915    6375 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 12:34:54.788450    6375 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 12:34:54.868374    6375 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 12:34:54.868443    6375 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 12:34:54.873882    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:54.960804    6375 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 12:34:56.100125    6375 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.139295958s)
	I1216 12:34:56.100199    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 12:34:56.106828    6375 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 12:34:56.113319    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 12:34:56.118060    6375 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 12:34:56.179413    6375 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 12:34:56.240247    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:56.316199    6375 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 12:34:56.322711    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 12:34:56.326887    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:56.404418    6375 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 12:34:56.441626    6375 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 12:34:56.441744    6375 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 12:34:56.444700    6375 start.go:563] Will wait 60s for crictl version
	I1216 12:34:56.444768    6375 ssh_runner.go:195] Run: which crictl
	I1216 12:34:56.446311    6375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 12:34:56.461764    6375 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1216 12:34:56.461846    6375 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 12:34:56.482458    6375 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 12:34:56.502211    6375 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1216 12:34:56.502358    6375 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1216 12:34:56.503661    6375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
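
Two details worth noting in the hosts-file rewrite above: 10.0.2.2 is the address QEMU's user-mode (slirp) network gives the guest for reaching the host, so this entry is what makes host.minikube.internal resolve to the Mac; and the rewrite goes through /tmp/h.$$ because the shell redirection runs unprivileged, with only the final cp elevated by sudo.
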
	I1216 12:34:56.507628    6375 kubeadm.go:883] updating cluster {Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1216 12:34:56.507671    6375 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 12:34:56.507724    6375 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 12:34:56.518040    6375 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 12:34:56.518048    6375 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 12:34:56.518103    6375 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 12:34:56.521196    6375 ssh_runner.go:195] Run: which lz4
	I1216 12:34:56.522490    6375 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 12:34:56.523691    6375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 12:34:56.523701    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1216 12:34:57.492812    6375 docker.go:653] duration metric: took 970.357875ms to copy over tarball
	I1216 12:34:57.492886    6375 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
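
The re-copy happens because the images baked into the stopped v1.26.0 guest carry the pre-rename k8s.gcr.io registry names (listed in the -- stdout -- block above), while this minikube build looks for registry.k8s.io/kube-apiserver:v1.24.1; since the existence check for /preloaded.tar.lz4 also failed, the cached 359,514,331-byte (~343 MiB) tarball is pushed over the forwarded SSH port and unpacked into /var with lz4.
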
	I1216 12:34:56.618614    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:34:56.618707    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:34:56.630832    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:34:56.630923    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:34:56.642647    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:34:56.642732    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:34:56.654969    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:34:56.655058    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:34:56.666823    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:34:56.666910    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:34:56.678442    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:34:56.678528    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:34:56.690739    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:34:56.690832    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:34:56.702343    6206 logs.go:282] 0 containers: []
	W1216 12:34:56.702353    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:34:56.702421    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:34:56.714907    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:34:56.714924    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:34:56.714930    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:34:56.737727    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:34:56.737740    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:34:56.754329    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:34:56.754342    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:34:56.767255    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:34:56.767272    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:34:56.808023    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:34:56.808039    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:34:56.821634    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:34:56.821648    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:34:56.846942    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:34:56.846957    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:34:56.861367    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:34:56.861379    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:34:56.866639    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:34:56.866649    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:34:56.881036    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:34:56.881053    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:34:56.896734    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:34:56.896746    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:34:56.913569    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:34:56.913581    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:34:56.930279    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:34:56.930291    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:34:56.942535    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:34:56.942549    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:34:56.984088    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:34:56.984103    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:34:56.999452    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:34:56.999465    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:34:57.012526    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:34:57.012539    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:34:58.675860    6375 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.182947375s)
	I1216 12:34:58.675882    6375 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 12:34:58.691812    6375 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 12:34:58.694918    6375 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1216 12:34:58.700186    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:58.788313    6375 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 12:35:00.560497    6375 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.772149959s)
	I1216 12:35:00.560601    6375 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 12:35:00.575003    6375 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 12:35:00.575015    6375 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 12:35:00.575020    6375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 12:35:00.581779    6375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:00.583374    6375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:00.584844    6375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:00.585131    6375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:00.586124    6375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:00.586178    6375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:00.587547    6375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:00.587585    6375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:00.588682    6375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:00.590072    6375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:00.590187    6375 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 12:35:00.590237    6375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:00.591096    6375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:00.591571    6375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:00.592408    6375 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 12:35:00.593008    6375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.353066    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:01.364132    6375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1216 12:35:01.364171    6375 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:01.364222    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:01.372459    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:01.376058    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1216 12:35:01.376961    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:01.389817    6375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1216 12:35:01.389845    6375 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:01.389918    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:01.391369    6375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1216 12:35:01.391386    6375 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:01.391437    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:01.407434    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1216 12:35:01.409607    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1216 12:35:01.444936    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:01.455557    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:01.457100    6375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1216 12:35:01.457124    6375 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:01.457168    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:01.470705    6375 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1216 12:35:01.470716    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1216 12:35:01.470726    6375 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:01.470782    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:01.480995    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 12:35:01.564797    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 12:35:01.575198    6375 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1216 12:35:01.575220    6375 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1216 12:35:01.575282    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1216 12:35:01.589451    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 12:35:01.589594    6375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1216 12:35:01.591159    6375 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1216 12:35:01.591169    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1216 12:35:01.599819    6375 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 12:35:01.599827    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1216 12:35:01.616869    6375 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 12:35:01.617021    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.625869    6375 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1216 12:35:01.630481    6375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1216 12:35:01.630507    6375 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.630581    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.641507    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 12:35:01.641657    6375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 12:35:01.643037    6375 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1216 12:35:01.643054    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1216 12:35:01.657985    6375 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 12:35:01.658132    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:01.678177    6375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 12:35:01.678203    6375 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:01.678268    6375 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:01.694214    6375 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 12:35:01.694228    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1216 12:35:01.702615    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 12:35:01.702783    6375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 12:35:01.741373    6375 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1216 12:35:01.741399    6375 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 12:35:01.741427    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1216 12:35:01.771613    6375 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 12:35:01.771626    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1216 12:35:02.004231    6375 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 12:35:02.004271    6375 cache_images.go:92] duration metric: took 1.429231333s to LoadCachedImages
	W1216 12:35:02.004306    6375 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
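
The image-loading sequence above follows one pattern per image: inspect its ID in the container runtime, and when the ID is missing or differs from the expected hash, remove the stale tag and reload the cached tarball via docker load. A rough sketch of that check-and-load loop under those assumptions; runCmd is an illustrative helper, not minikube's ssh_runner API:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runCmd runs a command and returns trimmed stdout; in minikube the
    // equivalent runs over SSH on the node.
    func runCmd(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // ensureImage mirrors the logged flow: if the image's runtime ID does not
    // match the expected hash, remove the tag and reload from the cached tar.
    func ensureImage(image, wantID, cachedTar string) error {
    	gotID, err := runCmd("docker", "image", "inspect", "--format", "{{.Id}}", image)
    	// docker prints IDs with a sha256: prefix; the log records bare hashes.
    	if err == nil && strings.TrimPrefix(gotID, "sha256:") == wantID {
    		return nil // already present with the right hash
    	}
    	_ = exec.Command("docker", "rmi", image).Run() // drop stale tag, if any
    	load := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", cachedTar))
    	return load.Run()
    }

    func main() {
    	// Values taken from the pause:3.7 entries in the log above.
    	err := ensureImage("registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7")
    	fmt.Println(err)
    }
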
	I1216 12:35:02.004312    6375 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1216 12:35:02.004367    6375 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-349000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 12:35:02.004439    6375 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 12:35:02.018121    6375 cni.go:84] Creating CNI manager for ""
	I1216 12:35:02.018133    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:35:02.018144    6375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 12:35:02.018153    6375 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-349000 NodeName:stopped-upgrade-349000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 12:35:02.018230    6375 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-349000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
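The kubeadm config above is rendered from the options struct two entries earlier by substituting values like the advertise address, node name, and CRI socket into a template. A minimal sketch of that substitution pattern with Go's text/template; this trimmed template is illustrative only, covering just the InitConfiguration header, and is not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A hypothetical, trimmed template for the InitConfiguration section
    // shown in the log above.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initTmpl))
    	// Values taken from the kubeadm options line in the log.
    	_ = t.Execute(os.Stdout, map[string]string{
    		"AdvertiseAddress": "10.0.2.15",
    		"APIServerPort":    "8443",
    		"CRISocket":        "/var/run/cri-dockerd.sock",
    		"NodeName":         "stopped-upgrade-349000",
    		"NodeIP":           "10.0.2.15",
    	})
    }

The rendered text is then shipped to the node as /var/tmp/minikube/kubeadm.yaml.new, which is what the scp-memory and diff steps below operate on.
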
	I1216 12:35:02.018305    6375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1216 12:35:02.021179    6375 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 12:35:02.021241    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 12:35:02.024320    6375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1216 12:35:02.029223    6375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 12:35:02.034152    6375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1216 12:35:02.039855    6375 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1216 12:35:02.041036    6375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 12:35:02.044816    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:35:02.129520    6375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:35:02.135880    6375 certs.go:68] Setting up /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000 for IP: 10.0.2.15
	I1216 12:35:02.135888    6375 certs.go:194] generating shared ca certs ...
	I1216 12:35:02.135897    6375 certs.go:226] acquiring lock for ca certs: {Name:mkaa7d3f47c3893d22672057b4e8b1df7ff583ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.136080    6375 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.key
	I1216 12:35:02.136855    6375 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.key
	I1216 12:35:02.136864    6375 certs.go:256] generating profile certs ...
	I1216 12:35:02.137131    6375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.key
	I1216 12:35:02.137146    6375 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09
	I1216 12:35:02.137159    6375 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1216 12:35:02.293901    6375 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09 ...
	I1216 12:35:02.293915    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09: {Name:mk24cb9d1c208b94e44645be350fcae9c9cc59c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.294285    6375 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09 ...
	I1216 12:35:02.294290    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09: {Name:mkd8e63c1869763c83ae20b5c66ff321c7a7d066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.294464    6375 certs.go:381] copying /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09 -> /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt
	I1216 12:35:02.294599    6375 certs.go:385] copying /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09 -> /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key
	I1216 12:35:02.295006    6375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/proxy-client.key
	I1216 12:35:02.295227    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494.pem (1338 bytes)
	W1216 12:35:02.295471    6375 certs.go:480] ignoring /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494_empty.pem, impossibly tiny 0 bytes
	I1216 12:35:02.295479    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 12:35:02.295511    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem (1082 bytes)
	I1216 12:35:02.295539    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem (1123 bytes)
	I1216 12:35:02.295562    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem (1675 bytes)
	I1216 12:35:02.295611    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem (1708 bytes)
	I1216 12:35:02.295996    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 12:35:02.303464    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 12:35:02.310544    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 12:35:02.317123    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 12:35:02.324254    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 12:35:02.330992    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 12:35:02.337253    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 12:35:02.344189    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 12:35:02.351214    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 12:35:02.357407    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494.pem --> /usr/share/ca-certificates/1494.pem (1338 bytes)
	I1216 12:35:02.364500    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem --> /usr/share/ca-certificates/14942.pem (1708 bytes)
	I1216 12:35:02.371472    6375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 12:35:02.376534    6375 ssh_runner.go:195] Run: openssl version
	I1216 12:35:02.378428    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 12:35:02.381282    6375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:35:02.382651    6375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:35:02.382681    6375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:35:02.384346    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 12:35:02.387449    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1494.pem && ln -fs /usr/share/ca-certificates/1494.pem /etc/ssl/certs/1494.pem"
	I1216 12:35:02.390441    6375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1494.pem
	I1216 12:35:02.391806    6375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/1494.pem
	I1216 12:35:02.391838    6375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1494.pem
	I1216 12:35:02.393550    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1494.pem /etc/ssl/certs/51391683.0"
	I1216 12:35:02.396808    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14942.pem && ln -fs /usr/share/ca-certificates/14942.pem /etc/ssl/certs/14942.pem"
	I1216 12:35:02.400063    6375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14942.pem
	I1216 12:35:02.401488    6375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/14942.pem
	I1216 12:35:02.401518    6375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14942.pem
	I1216 12:35:02.403229    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14942.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 12:35:02.406093    6375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 12:35:02.407527    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 12:35:02.409731    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 12:35:02.411638    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 12:35:02.413804    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 12:35:02.415537    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 12:35:02.417200    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
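
Each openssl run above asks the same question: will this certificate still be valid 86400 seconds (24 hours) from now? A minimal sketch of the equivalent check in Go's crypto/x509, under the assumption that the cert is a single PEM block on disk; notExpiringWithin is an illustrative name, not minikube's:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // notExpiringWithin reports whether the PEM certificate at path remains
    // valid for at least d, matching `openssl x509 -checkend` with d = 24h.
    func notExpiringWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Valid iff the expiry lies beyond now + d.
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := notExpiringWithin(
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
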
	I1216 12:35:02.419105    6375 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:35:02.419183    6375 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 12:35:02.429398    6375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 12:35:02.432549    6375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 12:35:02.432559    6375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 12:35:02.432591    6375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 12:35:02.436016    6375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:35:02.436337    6375 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-349000" does not appear in /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:35:02.436437    6375 kubeconfig.go:62] /Users/jenkins/minikube-integration/20091-990/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-349000" cluster setting kubeconfig missing "stopped-upgrade-349000" context setting]
	I1216 12:35:02.436621    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.437090    6375 kapi.go:59] client config for stopped-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106cfef70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:35:02.437597    6375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 12:35:02.440434    6375 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-349000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
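
The drift check above hinges on diff's exit status: `diff -u` exits 0 when the files match and 1 when they differ, so a non-zero exit with code 1 is the signal to reconfigure from the .new file. A minimal sketch of reading that status in Go, with configDrifted as an illustrative name rather than minikube's:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new` and interprets the exit code:
    // 0 means no drift, 1 means the files differ (drift), 2 means diff
    // itself failed.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ: drift detected
    	}
    	return false, "", err
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    	fmt.Print(diff)
    }

On drift, as the log shows next, the kube-system containers are stopped and the new config is copied over the old before kubeadm's init phases rerun.
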
	I1216 12:35:02.440440    6375 kubeadm.go:1160] stopping kube-system containers ...
	I1216 12:35:02.440487    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 12:35:02.450870    6375 docker.go:483] Stopping containers: [c238f990b3b5 195b09e77a13 a43b19631f1d 5c2af2bbc9dc 03ff67ad0d23 178b447de782 c3ca363e053e 22c7494ce80d]
	I1216 12:35:02.450948    6375 ssh_runner.go:195] Run: docker stop c238f990b3b5 195b09e77a13 a43b19631f1d 5c2af2bbc9dc 03ff67ad0d23 178b447de782 c3ca363e053e 22c7494ce80d
	I1216 12:35:02.461469    6375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 12:35:02.467262    6375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:35:02.469984    6375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:35:02.469990    6375 kubeadm.go:157] found existing configuration files:
	
	I1216 12:35:02.470025    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf
	I1216 12:35:02.472713    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:35:02.472746    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:35:02.475886    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf
	I1216 12:35:02.478506    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:35:02.478548    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:35:02.481011    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf
	I1216 12:35:02.484116    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:35:02.484142    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:35:02.487048    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf
	I1216 12:35:02.489486    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:35:02.489520    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:35:02.492430    6375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:35:02.495538    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:02.517741    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.146436    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.265437    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.290801    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.316258    6375 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:35:03.316346    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:34:59.534270    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:03.818414    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:04.318425    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:04.326084    6375 api_server.go:72] duration metric: took 1.009816458s to wait for apiserver process to appear ...
	I1216 12:35:04.326099    6375 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:35:04.326122    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
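
Both processes (6206 and 6375) are in the same wait loop from here on: GET /healthz over HTTPS with a short per-request timeout, log `stopped:` when the client deadline fires, and retry until the overall budget is spent. A minimal sketch of such a poll loop, assuming a self-contained client; the real one trusts the cluster CA, and InsecureSkipVerify here only keeps the example standalone:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver's /healthz until it returns 200 OK or
    // the overall deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request timeout, roughly the cadence in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute))
    }
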
	I1216 12:35:04.536684    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:04.536785    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:04.548264    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:04.548343    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:04.559857    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:04.559930    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:04.570698    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:04.570772    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:04.581664    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:04.581745    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:04.597870    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:04.597984    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:04.608714    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:04.608791    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:04.619330    6206 logs.go:282] 0 containers: []
	W1216 12:35:04.619342    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:04.619409    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:04.630435    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:04.630454    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:04.630460    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:04.646701    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:04.646714    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:04.663052    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:04.663068    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:04.675704    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:04.675717    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:04.689425    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:04.689436    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:04.704278    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:04.704293    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:04.729554    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:04.729564    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:04.741598    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:04.741611    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:04.778311    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:04.778322    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:04.790536    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:04.790549    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:04.809523    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:04.809538    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:04.822075    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:04.822088    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:04.837133    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:04.837148    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:04.853793    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:04.853806    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:04.858667    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:04.858676    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:04.873844    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:04.873856    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:04.892622    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:04.892634    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:07.444402    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:09.328248    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:09.328273    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:12.446673    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:12.446829    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:12.458412    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:12.458498    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:12.469922    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:12.469996    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:12.480485    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:12.480576    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:12.491059    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:12.491137    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:12.501575    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:12.501647    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:12.515961    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:12.516031    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:12.528393    6206 logs.go:282] 0 containers: []
	W1216 12:35:12.528405    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:12.528474    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:12.539478    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:12.539497    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:12.539504    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:12.579970    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:12.579980    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:12.594290    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:12.594305    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:12.609710    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:12.609721    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:12.621415    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:12.621425    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:12.641493    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:12.641504    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:12.658803    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:12.658814    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:12.673416    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:12.673426    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:12.709494    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:12.709506    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:12.723835    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:12.723846    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:12.735982    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:12.735993    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:12.747584    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:12.747596    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:12.752167    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:12.752174    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:12.764217    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:12.764228    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:12.776287    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:12.776298    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:12.791241    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:12.791252    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:12.802902    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:12.802915    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:14.328533    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:14.328572    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:15.327281    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:19.328921    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:19.328973    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:20.328297    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:20.328496    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:20.341709    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:20.341798    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:20.353219    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:20.353303    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:20.364072    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:20.364155    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:20.374543    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:20.374624    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:20.385069    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:20.385154    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:20.395541    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:20.395624    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:20.406087    6206 logs.go:282] 0 containers: []
	W1216 12:35:20.406098    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:20.406158    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:20.416455    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
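The eight `docker ps` calls above are the discovery step behind each gathering pass: one query per expected control-plane component, keyed on the `k8s_<component>` container-name prefix that the Docker runtime gives Kubernetes pod containers. Condensed:

    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      docker ps -a --filter=name=k8s_${component} --format={{.ID}}
    done

An empty result (as for kindnet here) just produces the "No container was found" warning, and that component is skipped.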
	I1216 12:35:20.416475    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:20.416481    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:20.456771    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:20.456780    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:20.470619    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:20.470630    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:20.485390    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:20.485402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:20.496928    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:20.496938    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:20.507824    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:20.507838    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:20.529965    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:20.529972    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:20.534750    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:20.534760    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:20.549832    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:20.549843    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:20.560778    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:20.560793    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:20.576895    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:20.576906    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:20.588219    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:20.588230    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:20.623143    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:20.623155    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:20.641792    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:20.641803    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:20.657413    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:20.657423    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:20.669554    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:20.669569    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:20.682051    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:20.682062    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:23.195853    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:24.329416    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:24.329456    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:28.198160    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:28.198457    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:28.230554    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:28.230695    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:28.245980    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:28.246072    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:28.258634    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:28.258714    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:28.269773    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:28.269853    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:28.280170    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:28.280249    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:28.291094    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:28.291180    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:28.301943    6206 logs.go:282] 0 containers: []
	W1216 12:35:28.301959    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:28.302019    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:28.312298    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:28.312316    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:28.312321    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:28.350461    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:28.350470    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:28.389011    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:28.389023    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:28.404061    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:28.404073    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:28.416073    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:28.416087    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:28.433558    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:28.433569    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:28.446767    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:28.446781    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:28.451581    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:28.451587    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:28.466046    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:28.466058    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:28.478007    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:28.478018    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:28.489456    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:28.489466    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:28.511168    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:28.511177    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:28.522893    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:28.522903    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:28.538593    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:28.538604    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:28.552433    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:28.552444    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:28.568386    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:28.568397    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:28.582186    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:28.582196    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:29.330376    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:29.330418    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:31.095579    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:34.331228    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:34.331267    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:36.096117    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
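The interleaved "Checking apiserver healthz" / "stopped" pairs from pids 6206 and 6375 are each a straight poll of the health endpoint with a per-request client timeout; every request in this section times out, which is what keeps triggering the log-gathering passes. A rough curl equivalent (the ~5 s gap between each "Checking" and its "stopped" suggests a short per-request timeout; the exact value and retry pacing are assumptions):

    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "healthz not reachable yet, retrying"
    done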
	I1216 12:35:36.096402    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:35:36.117664    6206 logs.go:282] 2 containers: [0d6674c9b7eb da32f2743333]
	I1216 12:35:36.117768    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:35:36.129970    6206 logs.go:282] 2 containers: [d768a7d5b0f7 fbde9cba9173]
	I1216 12:35:36.130055    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:35:36.142674    6206 logs.go:282] 1 containers: [384c3825bc29]
	I1216 12:35:36.142761    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:35:36.153804    6206 logs.go:282] 2 containers: [7c8c7af1d861 64d46781ed55]
	I1216 12:35:36.153885    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:35:36.163898    6206 logs.go:282] 1 containers: [45ebf06ac090]
	I1216 12:35:36.163979    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:35:36.174457    6206 logs.go:282] 2 containers: [6300a84b316e 5c703b0416ad]
	I1216 12:35:36.174532    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:35:36.188203    6206 logs.go:282] 0 containers: []
	W1216 12:35:36.188214    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:35:36.188281    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:35:36.199549    6206 logs.go:282] 2 containers: [992be1e60f1f 1adc59a90281]
	I1216 12:35:36.199565    6206 logs.go:123] Gathering logs for kube-apiserver [da32f2743333] ...
	I1216 12:35:36.199571    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da32f2743333"
	I1216 12:35:36.211938    6206 logs.go:123] Gathering logs for etcd [d768a7d5b0f7] ...
	I1216 12:35:36.211951    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d768a7d5b0f7"
	I1216 12:35:36.226281    6206 logs.go:123] Gathering logs for kube-controller-manager [6300a84b316e] ...
	I1216 12:35:36.226292    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6300a84b316e"
	I1216 12:35:36.244365    6206 logs.go:123] Gathering logs for kube-controller-manager [5c703b0416ad] ...
	I1216 12:35:36.244376    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c703b0416ad"
	I1216 12:35:36.258645    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:35:36.258656    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:35:36.281211    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:35:36.281218    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:35:36.319614    6206 logs.go:123] Gathering logs for kube-apiserver [0d6674c9b7eb] ...
	I1216 12:35:36.319622    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6674c9b7eb"
	I1216 12:35:36.333676    6206 logs.go:123] Gathering logs for kube-proxy [45ebf06ac090] ...
	I1216 12:35:36.333686    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45ebf06ac090"
	I1216 12:35:36.344961    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:35:36.344970    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:35:36.349145    6206 logs.go:123] Gathering logs for coredns [384c3825bc29] ...
	I1216 12:35:36.349152    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 384c3825bc29"
	I1216 12:35:36.360014    6206 logs.go:123] Gathering logs for storage-provisioner [1adc59a90281] ...
	I1216 12:35:36.360025    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1adc59a90281"
	I1216 12:35:36.373172    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:35:36.373183    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:35:36.384540    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:35:36.384553    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:35:36.420610    6206 logs.go:123] Gathering logs for storage-provisioner [992be1e60f1f] ...
	I1216 12:35:36.420624    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992be1e60f1f"
	I1216 12:35:36.432594    6206 logs.go:123] Gathering logs for kube-scheduler [64d46781ed55] ...
	I1216 12:35:36.432606    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64d46781ed55"
	I1216 12:35:36.453068    6206 logs.go:123] Gathering logs for etcd [fbde9cba9173] ...
	I1216 12:35:36.453078    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbde9cba9173"
	I1216 12:35:36.467642    6206 logs.go:123] Gathering logs for kube-scheduler [7c8c7af1d861] ...
	I1216 12:35:36.467653    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8c7af1d861"
	I1216 12:35:39.332529    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:39.332568    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:38.980825    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:43.983267    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:43.983376    6206 kubeadm.go:597] duration metric: took 4m4.910842833s to restartPrimaryControlPlane
	W1216 12:35:43.983475    6206 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 12:35:43.983513    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 12:35:45.050668    6206 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.067133583s)
	I1216 12:35:45.050744    6206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:35:45.055686    6206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:35:45.058692    6206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:35:45.061420    6206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:35:45.061426    6206 kubeadm.go:157] found existing configuration files:
	
	I1216 12:35:45.061463    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/admin.conf
	I1216 12:35:45.063847    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:35:45.063873    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:35:45.066780    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/kubelet.conf
	I1216 12:35:45.069350    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:35:45.069378    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:35:45.071901    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/controller-manager.conf
	I1216 12:35:45.074951    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:35:45.074972    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:35:45.077675    6206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/scheduler.conf
	I1216 12:35:45.080148    6206 kubeadm.go:163] "https://control-plane.minikube.internal:50805" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:35:45.080182    6206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
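The four grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. Here none of the files exist, so every grep exits with status 2 and every rm -f is a no-op. Condensed:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep https://control-plane.minikube.internal:50805 /etc/kubernetes/${f}.conf \
        || sudo rm -f /etc/kubernetes/${f}.conf
    done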
	I1216 12:35:45.083219    6206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:35:45.102251    6206 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1216 12:35:45.102284    6206 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:35:45.148704    6206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:35:45.148761    6206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:35:45.148801    6206 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 12:35:45.199468    6206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:35:45.203478    6206 out.go:235]   - Generating certificates and keys ...
	I1216 12:35:45.203513    6206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:35:45.203544    6206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:35:45.203582    6206 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:35:45.203618    6206 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:35:45.203652    6206 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:35:45.203686    6206 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:35:45.203721    6206 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:35:45.203754    6206 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:35:45.203787    6206 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:35:45.203825    6206 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:35:45.203843    6206 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:35:45.203870    6206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:35:45.364272    6206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:35:45.532349    6206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:35:45.571457    6206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:35:45.818675    6206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:35:45.848878    6206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:35:45.849254    6206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:35:45.849331    6206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:35:45.932475    6206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:35:44.333931    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:44.333965    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:45.936644    6206 out.go:235]   - Booting up control plane ...
	I1216 12:35:45.936690    6206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:35:45.936730    6206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:35:45.936766    6206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:35:45.936811    6206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:35:45.936889    6206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:35:50.437575    6206 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502609 seconds
	I1216 12:35:50.437639    6206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 12:35:50.441830    6206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 12:35:50.954072    6206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 12:35:50.954274    6206 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-868000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 12:35:51.457906    6206 kubeadm.go:310] [bootstrap-token] Using token: mzbv99.ptmg9051t5oylp1h
	I1216 12:35:51.462988    6206 out.go:235]   - Configuring RBAC rules ...
	I1216 12:35:51.463068    6206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 12:35:51.463114    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 12:35:51.465358    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 12:35:51.470436    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 12:35:51.471236    6206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 12:35:51.472227    6206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 12:35:51.475380    6206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 12:35:51.656188    6206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 12:35:51.861573    6206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 12:35:51.862119    6206 kubeadm.go:310] 
	I1216 12:35:51.862159    6206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 12:35:51.862166    6206 kubeadm.go:310] 
	I1216 12:35:51.862208    6206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 12:35:51.862215    6206 kubeadm.go:310] 
	I1216 12:35:51.862226    6206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 12:35:51.862254    6206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 12:35:51.862281    6206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 12:35:51.862283    6206 kubeadm.go:310] 
	I1216 12:35:51.862307    6206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 12:35:51.862310    6206 kubeadm.go:310] 
	I1216 12:35:51.862332    6206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 12:35:51.862334    6206 kubeadm.go:310] 
	I1216 12:35:51.862361    6206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 12:35:51.862395    6206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 12:35:51.862432    6206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 12:35:51.862437    6206 kubeadm.go:310] 
	I1216 12:35:51.862476    6206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 12:35:51.862514    6206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 12:35:51.862518    6206 kubeadm.go:310] 
	I1216 12:35:51.862563    6206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mzbv99.ptmg9051t5oylp1h \
	I1216 12:35:51.862617    6206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f \
	I1216 12:35:51.862632    6206 kubeadm.go:310] 	--control-plane 
	I1216 12:35:51.862636    6206 kubeadm.go:310] 
	I1216 12:35:51.862679    6206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 12:35:51.862682    6206 kubeadm.go:310] 
	I1216 12:35:51.862720    6206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mzbv99.ptmg9051t5oylp1h \
	I1216 12:35:51.862769    6206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f 
	I1216 12:35:51.862818    6206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:35:51.862826    6206 cni.go:84] Creating CNI manager for ""
	I1216 12:35:51.862835    6206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:35:51.867062    6206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 12:35:51.875006    6206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 12:35:51.877947    6206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
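The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not shown in the log. Purely as an illustration of the shape of such a file, a typical bridge-plus-portmap conflist built from the standard containernetworking plugins looks roughly like the following; the exact content minikube writes may differ:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF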
	I1216 12:35:51.885716    6206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 12:35:51.885805    6206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 12:35:51.885896    6206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-868000 minikube.k8s.io/updated_at=2024_12_16T12_35_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=running-upgrade-868000 minikube.k8s.io/primary=true
	I1216 12:35:51.918753    6206 ops.go:34] apiserver oom_adj: -16
	I1216 12:35:51.918751    6206 kubeadm.go:1113] duration metric: took 33.005791ms to wait for elevateKubeSystemPrivileges
	I1216 12:35:51.928522    6206 kubeadm.go:394] duration metric: took 4m12.870507667s to StartCluster
	I1216 12:35:51.928541    6206 settings.go:142] acquiring lock: {Name:mk8b3a21b6dc2a47a05d302a72ae4dd9a4679c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:51.928638    6206 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:35:51.929044    6206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:51.929245    6206 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:35:51.929268    6206 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 12:35:51.929305    6206 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-868000"
	I1216 12:35:51.929325    6206 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-868000"
	W1216 12:35:51.929330    6206 addons.go:243] addon storage-provisioner should already be in state true
	I1216 12:35:51.929344    6206 host.go:66] Checking if "running-upgrade-868000" exists ...
	I1216 12:35:51.929367    6206 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-868000"
	I1216 12:35:51.929403    6206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-868000"
	I1216 12:35:51.929534    6206 config.go:182] Loaded profile config "running-upgrade-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:35:51.930535    6206 kapi.go:59] client config for running-upgrade-868000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/running-upgrade-868000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104c82f70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:35:51.930817    6206 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-868000"
	W1216 12:35:51.930822    6206 addons.go:243] addon default-storageclass should already be in state true
	I1216 12:35:51.930829    6206 host.go:66] Checking if "running-upgrade-868000" exists ...
	I1216 12:35:51.932968    6206 out.go:177] * Verifying Kubernetes components...
	I1216 12:35:51.933301    6206 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 12:35:51.936098    6206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 12:35:51.936105    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:35:51.939005    6206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:49.335600    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:49.335640    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:51.943010    6206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:35:51.947019    6206 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 12:35:51.947026    6206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 12:35:51.947033    6206 sshutil.go:53] new ssh client: &{IP:localhost Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/running-upgrade-868000/id_rsa Username:docker}
	I1216 12:35:52.028654    6206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:35:52.033878    6206 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:35:52.033933    6206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:52.037666    6206 api_server.go:72] duration metric: took 108.407334ms to wait for apiserver process to appear ...
	I1216 12:35:52.037675    6206 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:35:52.037681    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:52.053129    6206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 12:35:52.069317    6206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 12:35:52.411634    6206 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 12:35:52.411646    6206 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 12:35:54.337800    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:54.337841    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:57.039865    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:57.039916    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:59.340195    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:59.340236    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:02.040396    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:02.040445    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:04.342499    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:04.342687    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:04.353658    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:04.353745    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:04.364381    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:04.364476    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:04.374921    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:04.375010    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:04.385189    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:04.385274    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:04.395082    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:04.395152    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:04.406013    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:04.406092    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:04.416755    6375 logs.go:282] 0 containers: []
	W1216 12:36:04.416767    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:04.416842    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:04.427327    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:04.427348    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:04.427355    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:04.431680    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:04.431688    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:04.550516    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:04.550530    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:04.563327    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:04.563341    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:04.574994    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:04.575007    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:04.587037    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:04.587053    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:04.602786    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:04.602797    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:04.620678    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:04.620692    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:04.655972    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:04.655985    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:04.670672    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:04.670682    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:04.685276    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:04.685286    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:04.696444    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:04.696455    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:04.721082    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:04.721104    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:04.759275    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:04.759287    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:04.774226    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:04.774237    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:04.786011    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:04.786023    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:04.797989    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:04.798001    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:07.314024    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:07.040909    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:07.040938    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:12.316318    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:12.316594    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:12.340054    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:12.340139    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:12.352794    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:12.352883    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:12.363622    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:12.363698    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:12.373618    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:12.373708    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:12.383766    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:12.383850    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:12.397521    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:12.397596    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:12.414260    6375 logs.go:282] 0 containers: []
	W1216 12:36:12.414275    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:12.414337    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:12.424898    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:12.424918    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:12.424924    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:12.429739    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:12.429748    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:12.469189    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:12.469203    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:12.484369    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:12.484383    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:12.495365    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:12.495378    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:12.507002    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:12.507013    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:12.543993    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:12.544002    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:12.558112    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:12.558127    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:12.581915    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:12.581924    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:12.606598    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:12.606609    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:12.618270    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:12.618282    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:12.629958    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:12.629969    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:12.645661    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:12.645672    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:12.657767    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:12.657780    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:12.672290    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:12.672299    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:12.683523    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:12.683533    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:12.705139    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:12.705151    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:12.041510    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:12.041545    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:15.222534    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:17.042695    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:17.042738    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:22.043838    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:22.043927    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1216 12:36:22.414325    6206 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1216 12:36:22.418503    6206 out.go:177] * Enabled addons: storage-provisioner
	I1216 12:36:20.224980    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:20.225514    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:20.259129    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:20.259286    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:20.279500    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:20.279605    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:20.294335    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:20.294424    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:20.306891    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:20.306972    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:20.319145    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:20.319225    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:20.330300    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:20.330377    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:20.341242    6375 logs.go:282] 0 containers: []
	W1216 12:36:20.341255    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:20.341328    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:20.358558    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:20.358583    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:20.358589    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:20.382723    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:20.382733    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:20.394106    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:20.394117    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:20.406512    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:20.406525    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:20.421863    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:20.421874    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:20.438988    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:20.438999    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:20.450792    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:20.450803    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:20.475731    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:20.475740    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:20.513343    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:20.513353    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:20.528019    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:20.528032    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:20.541964    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:20.541973    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:20.553275    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:20.553289    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:20.570699    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:20.570709    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:20.584077    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:20.584091    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:20.595506    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:20.595517    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:20.599947    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:20.599954    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:20.633688    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:20.633702    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:23.150242    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:22.426666    6206 addons.go:510] duration metric: took 30.497137917s for enable addons: enabled=[storage-provisioner]
	I1216 12:36:28.152598    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:28.152754    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:28.163296    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:28.163379    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:28.173680    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:28.173753    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:28.184395    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:28.184470    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:28.195077    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:28.195160    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:28.205469    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:28.205550    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:28.215659    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:28.215736    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:28.225840    6375 logs.go:282] 0 containers: []
	W1216 12:36:28.225854    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:28.225921    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:28.237466    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:28.237484    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:28.237489    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:28.250824    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:28.250839    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:28.287234    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:28.287246    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:28.299176    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:28.299188    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:28.314094    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:28.314105    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:28.339654    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:28.339663    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:28.351217    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:28.351229    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:28.365145    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:28.365156    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:28.379161    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:28.379171    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:28.390873    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:28.390883    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:28.402714    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:28.402723    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:28.414936    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:28.414947    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:28.426348    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:28.426360    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:28.440184    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:28.440194    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:28.457198    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:28.457211    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:28.497036    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:28.497052    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:28.501918    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:28.501924    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:27.045465    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:27.045510    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:31.028497    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:32.047318    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:32.047363    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:36.031128    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:36.031314    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:36.049654    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:36.049766    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:36.063512    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:36.063600    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:36.075317    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:36.075403    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:36.086424    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:36.086516    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:36.096644    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:36.096719    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:36.107067    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:36.107144    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:36.116898    6375 logs.go:282] 0 containers: []
	W1216 12:36:36.116911    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:36.116978    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:36.127050    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:36.127069    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:36.127074    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:36.131174    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:36.131183    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:36.166059    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:36.166071    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:36.198640    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:36.198653    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:36.210133    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:36.210146    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:36.227377    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:36.227386    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:36.253209    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:36.253217    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:36.292445    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:36.292457    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:36.307785    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:36.307797    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:36.321754    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:36.321768    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:36.333791    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:36.333802    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:36.345924    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:36.345934    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:36.366782    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:36.366798    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:36.378914    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:36.378928    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:36.394320    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:36.394331    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:36.409665    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:36.409676    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:36.423863    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:36.423873    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:37.048343    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:37.048396    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:38.936899    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:42.050674    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:42.050701    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:43.939197    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:43.939477    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:43.963551    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:43.963694    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:43.980355    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:43.980452    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:43.993528    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:43.993604    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:44.005182    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:44.005266    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:44.016094    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:44.016177    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:44.027184    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:44.027255    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:44.038153    6375 logs.go:282] 0 containers: []
	W1216 12:36:44.038165    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:44.038232    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:44.049473    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:44.049492    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:44.049518    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:44.064228    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:44.064240    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:44.077927    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:44.077939    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:44.089791    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:44.089802    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:44.102092    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:44.102105    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:44.133828    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:44.133841    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:44.149533    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:44.149549    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:44.165190    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:44.165201    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:44.178096    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:44.178108    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:44.216225    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:44.216233    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:44.220176    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:44.220182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:44.257606    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:44.257618    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:44.275390    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:44.275401    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:44.293704    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:44.293719    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:44.305014    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:44.305025    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:44.316810    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:44.316824    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:44.332671    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:44.332681    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
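
Each failed probe is followed by the same collection pass, visible verbatim above: list the container IDs for one Kubernetes component with "docker ps -a --filter name=... --format {{.ID}}", then tail the last 400 log lines of each hit. A sketch of that pass, assuming docker is on PATH; the Go wrapper is illustrative and is not minikube's ssh_runner, which runs these commands inside the guest VM.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (running or not) whose name matches
    // the given filter, mirroring the "docker ps -a --filter ... --format {{.ID}}"
    // invocations in the log.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // One component per call; the log cycles through apiserver, etcd, coredns,
        // scheduler, proxy, controller-manager, kindnet, and storage-provisioner.
        ids, err := containerIDs("k8s_kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        for _, id := range ids {
            // Same bounded tail as in the log: the last 400 lines per container.
            out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("--- logs for %s ---\n%s", id, out)
        }
    }
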
	I1216 12:36:46.857983    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:47.052976    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:47.053016    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:51.860440    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:51.860634    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:51.874705    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:51.874791    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:51.889111    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:51.889196    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:51.899503    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:51.899587    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:51.910386    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:51.910462    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:51.921250    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:51.921320    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:51.931964    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:51.932035    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:51.942012    6375 logs.go:282] 0 containers: []
	W1216 12:36:51.942028    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:51.942104    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:51.952643    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:51.952661    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:51.952666    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:51.957290    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:51.957300    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:51.981691    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:51.981706    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:51.999044    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:51.999057    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:52.017001    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:52.017012    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:52.046025    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:52.046037    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:52.061510    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:52.061523    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:52.077306    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:52.077317    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:52.090470    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:52.090482    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:52.117090    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:52.117110    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:52.158586    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:52.158598    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:52.171687    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:52.171695    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:52.189066    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:52.189081    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:52.200941    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:52.200953    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:52.238558    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:52.238570    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:52.255295    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:52.255313    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:52.271909    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:52.271926    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:52.055332    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:52.055504    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:52.078585    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:36:52.078660    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:52.095291    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:36:52.095368    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:52.108278    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:36:52.108350    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:52.120314    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:36:52.120390    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:52.132234    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:36:52.132315    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:52.144114    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:36:52.144203    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:52.156000    6206 logs.go:282] 0 containers: []
	W1216 12:36:52.156011    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:52.156073    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:52.171188    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:36:52.171204    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:52.171211    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:52.175961    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:52.175972    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:52.216619    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:36:52.216631    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:36:52.233058    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:36:52.233073    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:36:52.247983    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:36:52.247996    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:36:52.263768    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:36:52.263782    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:36:52.287306    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:36:52.287318    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:36:52.310258    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:36:52.310271    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:52.321522    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:52.321536    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:52.360518    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:36:52.360529    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:36:52.372124    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:36:52.372136    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:36:52.383343    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:36:52.383354    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:36:52.400059    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:52.400068    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:54.787925    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:54.927616    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:59.789798    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:59.790009    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:59.806394    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:59.806497    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:59.819404    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:59.819488    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:59.830640    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:59.830720    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:59.841691    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:59.841771    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:59.851971    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:59.852048    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:59.862013    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:59.862080    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:59.872205    6375 logs.go:282] 0 containers: []
	W1216 12:36:59.872217    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:59.872283    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:59.883447    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:59.883465    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:59.883470    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:59.899024    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:59.899037    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:59.920908    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:59.920919    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:59.933002    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:59.933013    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:59.946012    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:59.946024    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:59.984271    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:59.984286    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:00.010786    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:00.010805    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:00.026151    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:00.026160    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:00.042925    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:00.042941    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:00.056885    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:00.056902    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:00.098143    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:00.098151    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:00.102972    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:00.102984    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:00.116053    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:00.116066    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:00.128126    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:00.128138    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:00.141545    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:00.141554    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:00.158521    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:00.158532    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:00.183705    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:00.183716    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:02.711201    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:59.930257    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:59.930361    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:59.941843    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:36:59.941925    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:59.953113    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:36:59.953195    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:59.968246    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:36:59.968325    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:59.979348    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:36:59.979428    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:59.991116    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:36:59.991200    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:00.003140    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:00.003224    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:00.014225    6206 logs.go:282] 0 containers: []
	W1216 12:37:00.014238    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:00.014310    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:00.025462    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:00.025478    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:00.025484    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:00.037896    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:00.037911    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:00.074256    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:00.074268    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:00.086605    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:00.086621    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:00.098072    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:00.098086    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:00.115468    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:00.115482    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:00.140309    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:00.140327    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:00.159017    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:00.159026    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:00.170868    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:00.170881    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:00.209120    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:00.209130    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:00.213663    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:00.213673    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:00.228642    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:00.228654    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:00.242841    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:00.242852    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:02.756475    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:07.713655    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:07.713823    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:07.729924    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:07.730011    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:07.742449    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:07.742534    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:07.776320    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:07.776403    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:07.790087    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:07.790169    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:07.802207    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:07.802290    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:07.817823    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:07.817907    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:07.834480    6375 logs.go:282] 0 containers: []
	W1216 12:37:07.834492    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:07.834553    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:07.852660    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:07.852674    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:07.852679    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:07.891718    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:07.891733    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:07.908618    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:07.908632    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:07.923316    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:07.923330    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:07.935881    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:07.935895    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:07.962378    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:07.962389    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:07.984819    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:07.984828    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:08.004381    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:08.004398    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:08.016628    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:08.016640    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:08.021116    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:08.021127    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:08.060304    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:08.060320    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:08.080226    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:08.080241    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:08.095129    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:08.095139    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:08.106807    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:08.106819    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:08.123373    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:08.123387    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:08.134892    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:08.134908    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:08.146394    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:08.146405    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:07.758695    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:07.758795    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:07.770880    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:07.770956    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:07.782228    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:07.782301    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:07.793321    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:07.793400    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:07.806549    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:07.806621    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:07.818257    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:07.818300    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:07.829539    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:07.829618    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:07.840259    6206 logs.go:282] 0 containers: []
	W1216 12:37:07.840272    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:07.840343    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:07.851343    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:07.851359    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:07.851365    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:07.855971    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:07.855981    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:07.871926    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:07.871938    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:07.885323    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:07.885336    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:07.897716    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:07.897730    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:07.913592    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:07.913605    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:07.932388    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:07.932407    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:07.945424    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:07.945437    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:07.984619    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:07.984631    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:08.022197    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:08.022206    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:08.037620    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:08.037633    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:08.049802    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:08.049816    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:08.062644    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:08.062655    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
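
Besides per-container logs, every pass also gathers host-level diagnostics; the exact shell commands appear in the ssh_runner lines above. A sketch that replays them locally, assuming bash, sudo, journalctl, and the minikube-installed kubectl are available; in the real run they execute over SSH inside the guest VM, not on the host.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Host-level diagnostics, copied verbatim from the ssh_runner invocations above.
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u docker -u cri-docker -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("command %q failed: %v\n", c, err)
            }
            fmt.Printf("--- %s ---\n%s", c, out)
        }
    }
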
	I1216 12:37:10.673571    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:10.589461    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:15.676217    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:15.676283    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:15.688251    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:15.688324    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:15.699567    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:15.699647    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:15.711160    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:15.711241    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:15.723506    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:15.723632    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:15.735018    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:15.735097    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:15.746896    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:15.746982    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:15.758017    6375 logs.go:282] 0 containers: []
	W1216 12:37:15.758030    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:15.758101    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:15.770270    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:15.770290    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:15.770296    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:15.809906    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:15.809920    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:15.822723    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:15.822736    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:15.836480    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:15.836497    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:15.877173    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:15.877188    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:15.896163    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:15.896174    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:15.912172    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:15.912183    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:15.924151    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:15.924163    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:15.928497    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:15.928503    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:15.941916    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:15.941923    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:15.956686    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:15.956700    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:15.972598    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:15.972613    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:15.987534    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:15.987544    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:16.007563    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:16.007577    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:16.019212    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:16.019226    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:16.036395    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:16.036405    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:16.060892    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:16.060899    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:18.586329    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:15.592166    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:15.592392    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:15.612669    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:15.612783    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:15.627113    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:15.627195    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:15.639901    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:15.639984    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:15.650665    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:15.650749    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:15.662959    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:15.663048    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:15.673427    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:15.673505    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:15.684951    6206 logs.go:282] 0 containers: []
	W1216 12:37:15.684967    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:15.685041    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:15.696640    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:15.696659    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:15.696665    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:15.709242    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:15.709256    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:15.722000    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:15.722014    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:15.737822    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:15.737835    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:15.758502    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:15.758514    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:15.771088    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:15.771098    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:15.812215    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:15.812224    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:15.827234    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:15.827247    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:15.842455    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:15.842467    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:15.860936    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:15.860947    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:15.888135    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:15.888149    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:15.901061    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:15.901077    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:15.941744    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:15.941757    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:18.449267    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:23.588660    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:23.588766    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:23.600027    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:23.600108    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:23.614497    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:23.614583    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:23.627113    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:23.627190    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:23.639408    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:23.639493    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:23.651182    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:23.651262    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:23.451626    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:23.451809    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:23.464364    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:23.464448    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:23.474803    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:23.474884    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:23.485180    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:23.485259    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:23.496454    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:23.496534    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:23.507406    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:23.507491    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:23.517732    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:23.517813    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:23.528213    6206 logs.go:282] 0 containers: []
	W1216 12:37:23.528227    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:23.528308    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:23.539931    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:23.539947    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:23.539952    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:23.579416    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:23.579452    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:23.616199    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:23.616209    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:23.635499    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:23.635512    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:23.647871    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:23.647884    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:23.660643    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:23.660657    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:23.685019    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:23.685032    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:23.697841    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:23.697855    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:23.702730    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:23.702741    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:23.717749    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:23.717760    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:23.732031    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:23.732045    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:23.751135    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:23.751148    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:23.763517    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:23.763527    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:23.662293    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:23.662382    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:23.677737    6375 logs.go:282] 0 containers: []
	W1216 12:37:23.677750    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:23.677821    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:23.689835    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:23.689854    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:23.689860    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:23.716896    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:23.716912    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:23.750329    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:23.750340    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:23.762589    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:23.762600    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:23.775048    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:23.775061    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:23.779928    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:23.779936    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:23.797181    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:23.797192    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:23.809546    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:23.809557    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:23.821695    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:23.821710    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:23.859179    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:23.859190    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:23.870698    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:23.870709    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:23.884201    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:23.884212    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:23.902151    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:23.902164    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:23.919172    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:23.919182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:23.942574    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:23.942582    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:23.975926    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:23.975937    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:23.989999    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:23.990008    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
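When the probe fails, each cycle first locates the relevant containers, one component at a time, with a docker ps name filter (the kubelet names pod containers with a k8s_<component> prefix). A sketch of that enumeration step, assuming only a local docker CLI; the component list mirrors the queries above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose name matches the k8s_<component> prefix.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // prints e.g. "0 containers: []" for kindnet, as the warnings above show
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }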
	I1216 12:37:26.506809    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:26.292370    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:31.509135    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:31.509239    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:31.520638    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:31.520723    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:31.535072    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:31.535153    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:31.546679    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:31.546763    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:31.558273    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:31.558358    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:31.569783    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:31.569864    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:31.581173    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:31.581256    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:31.592108    6375 logs.go:282] 0 containers: []
	W1216 12:37:31.592118    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:31.592190    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:31.604265    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:31.604285    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:31.604291    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:31.642990    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:31.643004    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:31.654729    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:31.654742    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:31.667565    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:31.667577    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:31.694066    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:31.694077    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:31.708552    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:31.708562    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:31.720669    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:31.720679    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:31.735569    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:31.735581    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:31.770456    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:31.770469    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:31.784504    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:31.784518    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:31.802007    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:31.802020    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:31.817360    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:31.817370    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:31.828932    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:31.828945    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:31.853587    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:31.853598    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:31.865858    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:31.865871    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:31.871145    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:31.871157    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:31.886245    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:31.886259    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:31.294661    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:31.294913    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:31.320986    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:31.321096    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:31.335183    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:31.335271    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:31.347413    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:31.347488    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:31.360199    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:31.360277    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:31.372110    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:31.372189    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:31.382667    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:31.382749    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:31.393260    6206 logs.go:282] 0 containers: []
	W1216 12:37:31.393274    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:31.393340    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:31.403693    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:31.403717    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:31.403722    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:31.414930    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:31.414941    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:31.439405    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:31.439417    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:31.477168    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:31.477178    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:31.519092    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:31.519103    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:31.531693    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:31.531708    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:31.550565    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:31.550580    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:31.566356    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:31.566372    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:31.579538    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:31.579551    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:31.591879    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:31.591892    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:31.597292    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:31.597304    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:31.612685    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:31.612697    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:31.627012    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:31.627024    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:34.400599    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:34.140927    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:39.402915    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:39.403026    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:39.415538    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:39.415614    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:39.427336    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:39.427421    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:39.438811    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:39.438897    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:39.449993    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:39.450068    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:39.461591    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:39.461670    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:39.472629    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:39.472699    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:39.482947    6375 logs.go:282] 0 containers: []
	W1216 12:37:39.482959    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:39.483027    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:39.494816    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:39.494833    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:39.494839    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:39.534778    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:39.534791    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:39.559817    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:39.559830    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:39.574723    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:39.574736    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:39.586001    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:39.586012    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:39.598312    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:39.598323    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:39.612168    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:39.612180    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:39.626902    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:39.626914    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:39.643818    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:39.643831    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:39.658169    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:39.658182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:39.682990    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:39.682998    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:39.718164    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:39.718175    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:39.732108    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:39.732122    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:39.744370    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:39.744383    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:39.759959    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:39.759970    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:39.776386    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:39.776400    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:39.780404    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:39.780411    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
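With the IDs in hand, the cycle tails each container's log and pulls the kubelet and Docker journald units; every command in the "Gathering logs" pairs above is run through /bin/bash -c over SSH. A sketch of the gather step, run locally rather than over SSH for simplicity (the container ID is one taken from the enumeration above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command and returns its combined output.
    func gather(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        cmds := []string{
            "docker logs --tail 400 7f47c1427198",            // one etcd container from the list above
            "sudo journalctl -u kubelet -n 400",              // kubelet unit
            "sudo journalctl -u docker -u cri-docker -n 400", // Docker and cri-docker units
        }
        for _, c := range cmds {
            out, err := gather(c)
            if err != nil {
                fmt.Println(c, "failed:", err)
                continue
            }
            fmt.Println(out)
        }
    }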
	I1216 12:37:42.292859    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:39.143467    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:39.143963    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:39.178975    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:39.179141    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:39.197496    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:39.197599    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:39.213942    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:39.214028    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:39.226368    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:39.226460    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:39.241040    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:39.241125    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:39.252413    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:39.252497    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:39.263660    6206 logs.go:282] 0 containers: []
	W1216 12:37:39.263673    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:39.263750    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:39.273982    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:39.273997    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:39.274005    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:39.311704    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:39.311720    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:39.324156    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:39.324172    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:39.341272    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:39.341283    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:39.354129    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:39.354140    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:39.366788    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:39.366802    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:39.371194    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:39.371206    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:39.407113    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:39.407126    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:39.422394    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:39.422406    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:39.437237    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:39.437249    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:39.452535    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:39.452545    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:39.481034    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:39.481046    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:39.507404    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:39.507419    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:42.021556    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:47.293190    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:47.293289    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:47.308156    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:47.308247    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:47.319769    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:47.319852    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:47.331318    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:47.331402    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:47.343133    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:47.343220    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:47.358356    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:47.358430    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:47.371888    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:47.371960    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:47.382241    6375 logs.go:282] 0 containers: []
	W1216 12:37:47.382252    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:47.382314    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:47.392936    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:47.392958    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:47.392968    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:47.408479    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:47.408491    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:47.420702    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:47.420716    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:47.459221    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:47.459231    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:47.463449    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:47.463456    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:47.474771    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:47.474783    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:47.497006    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:47.497012    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:47.513240    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:47.513254    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:47.538396    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:47.538407    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:47.549580    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:47.549592    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:47.560813    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:47.560824    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:47.578034    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:47.578045    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:47.591822    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:47.591835    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:47.629561    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:47.629576    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:47.644520    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:47.644537    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:47.659478    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:47.659488    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:47.671173    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:47.671184    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:47.023938    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:47.024159    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:47.041645    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:47.041740    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:47.056643    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:47.056724    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:47.067830    6206 logs.go:282] 2 containers: [913aa0aa8c39 bf6b78109554]
	I1216 12:37:47.067909    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:47.078908    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:47.078984    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:47.089000    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:47.089079    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:47.100532    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:47.100609    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:47.110680    6206 logs.go:282] 0 containers: []
	W1216 12:37:47.110694    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:47.110757    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:47.122059    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:47.122075    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:47.122080    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:47.138166    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:47.138181    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:47.159424    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:47.159437    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:47.197714    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:47.197724    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:47.202314    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:47.202321    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:47.239825    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:47.239837    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:47.258819    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:47.258830    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:47.275244    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:47.275254    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:47.300468    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:47.300482    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:47.315642    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:47.315655    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:47.331006    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:47.331022    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:47.344259    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:47.344269    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:47.356175    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:47.356187    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
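The "container status" step in each cycle uses a shell fallback: run crictl if it is on PATH, otherwise fall back to docker ps -a. The same preference order can be sketched in Go, assuming at least one of the two binaries is installed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers the CRI-level view (crictl) and falls back to
    // docker, mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }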
	I1216 12:37:50.187508    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:49.877203    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:55.189899    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:55.190008    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:55.202092    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:55.202174    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:55.213090    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:55.213176    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:55.224776    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:55.224858    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:55.239115    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:55.239193    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:55.249841    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:55.249918    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:55.262244    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:55.262322    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:55.272111    6375 logs.go:282] 0 containers: []
	W1216 12:37:55.272125    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:55.272196    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:55.282962    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:55.282979    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:55.282985    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:55.322876    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:55.322890    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:55.327236    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:55.327245    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:55.360680    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:55.360690    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:55.376695    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:55.376705    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:55.393348    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:55.393360    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:55.405463    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:55.405473    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:55.416952    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:55.416963    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:55.431521    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:55.431532    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:55.443108    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:55.443120    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:55.456464    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:55.456474    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:55.468698    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:55.468710    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:55.480650    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:55.480664    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:55.509339    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:55.509350    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:55.523621    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:55.523635    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:55.540357    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:55.540371    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:55.553207    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:55.553216    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:58.077223    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:54.879564    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:54.879738    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:54.895736    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:37:54.895825    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:54.908638    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:37:54.908718    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:54.919883    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:37:54.919966    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:54.931449    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:37:54.931523    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:54.942481    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:37:54.942551    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:54.953406    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:37:54.953486    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:54.964492    6206 logs.go:282] 0 containers: []
	W1216 12:37:54.964508    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:54.964580    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:54.976148    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:37:54.976166    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:54.976172    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:55.013131    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:37:55.013144    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:37:55.025364    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:37:55.025381    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:37:55.044392    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:37:55.044403    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:37:55.060293    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:37:55.060305    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:37:55.072114    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:55.072125    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:55.108866    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:37:55.108874    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:37:55.123778    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:37:55.123790    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:37:55.135059    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:37:55.135070    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:37:55.147045    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:37:55.147059    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:37:55.164640    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:37:55.164653    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:55.177266    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:55.177276    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:55.182131    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:37:55.182138    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:37:55.195162    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:37:55.195177    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:37:55.215739    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:55.215750    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:57.743313    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
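Process 6206's coredns query now returns four containers instead of two (6408be651234 and 857d26c080c8 are new), which suggests the kubelet keeps restarting pods while the apiserver stays unreachable. Tying the pieces together, each process runs an outer wait loop: probe, gather diagnostics on failure, back off, repeat. A self-contained sketch of that loop follows; the overall four-minute budget is an assumption, since the test's real deadline is not visible in this excerpt, while the roughly three-second back-off is read from the gaps between cycles:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "https://10.0.2.15:8443/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second, // per-probe timeout, as in the sketch further up
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            // a real cycle re-enumerates containers and re-gathers logs here,
            // exactly as the repeated "Gathering logs for ..." blocks above do
            time.Sleep(3 * time.Second) // roughly the gap between cycles in the timestamps
        }
        fmt.Println("apiserver never became healthy before the deadline")
    }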
	I1216 12:38:03.079536    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:03.079674    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:03.092404    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:03.092485    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:03.104198    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:03.104274    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:03.121031    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:03.121105    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:03.132167    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:03.132252    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:03.142777    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:03.142853    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:03.153244    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:03.153324    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:03.165048    6375 logs.go:282] 0 containers: []
	W1216 12:38:03.165060    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:03.165123    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:03.175226    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:03.175243    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:03.175248    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:03.188500    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:03.188511    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:03.200031    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:03.200042    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:03.211890    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:03.211900    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:03.235815    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:03.235825    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:03.248352    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:03.248366    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:03.262308    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:03.262319    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:03.287316    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:03.287328    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:03.298398    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:03.298409    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:03.310446    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:03.310457    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:03.324200    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:03.324213    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:03.340785    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:03.340795    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:03.357178    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:03.357193    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:03.396092    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:03.396108    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:03.400480    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:03.400490    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:03.436407    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:03.436418    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:03.450882    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:03.450895    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:02.743875    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:02.744132    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:02.764924    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:02.765035    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:02.780484    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:02.780564    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:02.793129    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:02.793219    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:02.805197    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:02.805270    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:02.816722    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:02.816792    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:02.827914    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:02.827986    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:02.839216    6206 logs.go:282] 0 containers: []
	W1216 12:38:02.839226    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:02.839285    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:02.850376    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:02.850396    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:02.850402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:02.863282    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:02.863293    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:02.867625    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:02.867635    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:02.881612    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:02.881623    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:02.894055    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:02.894071    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:02.912564    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:02.912579    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:02.930221    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:02.930232    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:02.942731    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:02.942741    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:02.955771    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:02.955783    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:02.967205    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:02.967216    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:02.979064    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:02.979074    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:02.990881    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:02.990891    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:03.015801    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:03.015809    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:03.051843    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:03.051853    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:03.090208    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:03.090223    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:05.970151    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:05.608643    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:10.972449    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:10.972547    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:10.984527    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:10.984602    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:10.995775    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:10.995860    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:11.007017    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:11.007096    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:11.018161    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:11.018249    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:11.028827    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:11.028902    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:11.039295    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:11.039365    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:11.049905    6375 logs.go:282] 0 containers: []
	W1216 12:38:11.049917    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:11.049984    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:11.060272    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:11.060292    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:11.060297    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:11.074172    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:11.074181    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:11.099955    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:11.099965    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:11.111465    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:11.111477    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:11.128128    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:11.128138    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:11.140060    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:11.140069    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:11.163583    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:11.163590    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:11.202236    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:11.202248    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:11.238651    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:11.238665    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:11.252582    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:11.252596    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:11.264129    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:11.264140    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:11.279333    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:11.279344    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:11.283659    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:11.283665    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:11.301221    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:11.301231    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:11.313110    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:11.313122    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:11.328152    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:11.328163    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:11.340126    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:11.340138    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:10.611035    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:10.611444    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:10.644775    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:10.644903    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:10.665140    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:10.665242    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:10.680636    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:10.680727    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:10.693512    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:10.693594    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:10.704679    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:10.704760    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:10.716237    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:10.716312    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:10.727315    6206 logs.go:282] 0 containers: []
	W1216 12:38:10.727329    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:10.727398    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:10.743252    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:10.743269    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:10.743276    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:10.748207    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:10.748215    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:10.790361    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:10.790391    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:10.803609    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:10.803622    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:10.818904    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:10.818917    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:10.830998    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:10.831010    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:10.854518    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:10.854525    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:10.890677    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:10.890687    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:10.905827    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:10.905839    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:10.920336    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:10.920346    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:10.932514    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:10.932528    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:10.944775    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:10.944785    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:10.963642    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:10.963655    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:10.976224    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:10.976235    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:10.992259    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:10.992271    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:13.506914    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:13.855378    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:18.509338    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:18.509593    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:18.530974    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:18.531085    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:18.546996    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:18.547084    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:18.559694    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:18.559775    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:18.571108    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:18.571189    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:18.582505    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:18.582591    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:18.594159    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:18.594237    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:18.605870    6206 logs.go:282] 0 containers: []
	W1216 12:38:18.605881    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:18.605942    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:18.618538    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:18.618555    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:18.618563    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:18.657468    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:18.657478    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:18.702364    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:18.702376    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:18.714284    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:18.714296    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:18.729856    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:18.729869    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:18.742634    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:18.742650    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:18.747113    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:18.747120    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:18.759369    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:18.759379    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:18.783851    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:18.783864    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:18.798515    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:18.798529    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:18.810737    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:18.810749    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:18.822652    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:18.822663    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:18.841310    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:18.841322    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:18.855970    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:18.856088    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:18.870176    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:18.870263    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:18.883058    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:18.883151    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:18.894495    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:18.894576    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:18.904502    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:18.904581    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:18.915103    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:18.915184    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:18.925892    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:18.925969    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:18.936342    6375 logs.go:282] 0 containers: []
	W1216 12:38:18.936356    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:18.936418    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:18.947131    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:18.947148    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:18.947154    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:18.961973    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:18.961985    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:18.973706    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:18.973717    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:18.989002    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:18.989014    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:19.006284    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:19.006294    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:19.019517    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:19.019528    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:19.032921    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:19.032932    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:19.044335    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:19.044344    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:19.055276    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:19.055288    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:19.059834    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:19.059843    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:19.094373    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:19.094385    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:19.118827    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:19.118838    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:19.130606    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:19.130617    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:19.153292    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:19.153300    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:19.189656    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:19.189663    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:19.204325    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:19.204337    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:19.215864    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:19.215878    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:21.730226    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:18.855530    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:18.855540    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:18.872392    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:18.872402    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:21.387879    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:26.732807    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:26.732916    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:26.744780    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:26.744872    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:26.765976    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:26.766058    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:26.777260    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:26.777341    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:26.787788    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:26.787865    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:26.798730    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:26.798811    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:26.809352    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:26.809438    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:26.819179    6375 logs.go:282] 0 containers: []
	W1216 12:38:26.819192    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:26.819257    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:26.835627    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:26.835646    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:26.835651    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:26.850249    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:26.850259    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:26.862995    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:26.863010    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:26.878597    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:26.878607    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:26.891688    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:26.891702    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:26.903832    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:26.903844    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:26.917761    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:26.917771    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:26.931973    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:26.931987    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:26.943804    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:26.943821    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:26.955699    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:26.955709    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:26.973678    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:26.973689    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:27.012065    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:27.012073    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:27.016500    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:27.016505    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:27.050330    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:27.050342    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:27.069857    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:27.069868    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:27.094389    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:27.094400    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:27.106399    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:27.106414    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:26.390261    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:26.390541    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:26.414057    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:26.414185    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:26.430693    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:26.430773    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:26.444092    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:26.444181    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:26.458993    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:26.459065    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:26.469575    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:26.469654    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:26.483195    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:26.483270    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:26.493670    6206 logs.go:282] 0 containers: []
	W1216 12:38:26.493684    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:26.493752    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:26.508468    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:26.508488    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:26.508493    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:26.545732    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:26.545742    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:26.558654    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:26.558672    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:26.572699    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:26.572716    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:26.584811    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:26.584825    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:26.596537    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:26.596551    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:26.601465    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:26.601471    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:26.637306    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:26.637318    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:26.651315    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:26.651326    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:26.664082    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:26.664094    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:26.693610    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:26.693628    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:26.708531    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:26.708543    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:26.721497    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:26.721510    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:26.734315    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:26.734325    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:26.747069    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:26.747085    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:29.631290    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:29.273719    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:34.633599    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:34.633711    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:34.644288    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:34.644368    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:34.658710    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:34.658779    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:34.669336    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:34.669417    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:34.682645    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:34.682718    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:34.701393    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:34.701471    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:34.712430    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:34.712501    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:34.722746    6375 logs.go:282] 0 containers: []
	W1216 12:38:34.722757    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:34.722819    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:34.733442    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:34.733463    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:34.733469    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:34.747709    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:34.747719    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:34.765263    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:34.765274    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:34.769464    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:34.769471    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:34.781031    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:34.781043    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:34.792858    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:34.792872    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:34.804473    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:34.804484    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:34.822041    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:34.822054    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:34.833764    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:34.833775    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:34.845700    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:34.845710    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:34.870120    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:34.870131    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:34.907559    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:34.907572    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:34.951811    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:34.951822    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:34.966211    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:34.966221    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:34.980218    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:34.980230    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:34.995449    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:34.995459    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:35.007138    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:35.007150    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:37.530176    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:34.276120    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:34.276268    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:34.289799    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:34.289888    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:34.309699    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:34.309783    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:34.321897    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:34.321973    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:34.332485    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:34.332550    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:34.343160    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:34.343224    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:34.353480    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:34.353563    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:34.363430    6206 logs.go:282] 0 containers: []
	W1216 12:38:34.363441    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:34.363511    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:34.373558    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:34.373578    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:34.373584    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:34.385889    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:34.385900    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:34.399732    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:34.399744    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:34.413920    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:34.413932    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:34.425273    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:34.425286    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:34.443107    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:34.443117    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:34.457567    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:34.457579    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:34.475735    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:34.475747    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:34.497629    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:34.497641    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:34.521653    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:34.521663    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:34.557904    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:34.557912    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:34.562241    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:34.562250    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:34.573993    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:34.574004    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:34.607983    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:34.607994    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:34.619722    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:34.619732    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:37.133723    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:42.532491    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:42.532680    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:42.544174    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:42.544264    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:42.554606    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:42.554685    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:42.564882    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:42.564959    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:42.576011    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:42.576086    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:42.591133    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:42.591209    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:42.602217    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:42.602283    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:42.612062    6375 logs.go:282] 0 containers: []
	W1216 12:38:42.612073    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:42.612137    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:42.623790    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:42.623815    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:42.623822    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:42.660147    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:42.660154    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:42.695398    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:42.695412    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:42.710696    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:42.710707    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:42.721815    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:42.721825    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:42.734055    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:42.734066    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:42.749533    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:42.749544    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:42.770275    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:42.770285    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:42.794049    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:42.794059    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:42.809477    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:42.809488    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:42.820990    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:42.821004    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:42.825102    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:42.825110    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:42.850718    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:42.850735    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:42.863734    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:42.863745    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:42.875563    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:42.875575    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:42.888148    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:42.888160    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:42.902947    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:42.902961    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:42.136439    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:42.136640    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:42.154261    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:42.154361    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:42.167479    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:42.167560    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:42.178670    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:42.178751    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:42.189091    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:42.189160    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:42.199646    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:42.199716    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:42.209678    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:42.209748    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:42.220086    6206 logs.go:282] 0 containers: []
	W1216 12:38:42.220097    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:42.220162    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:42.230542    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:42.230561    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:42.230568    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:42.245678    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:42.245688    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:42.284745    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:42.284755    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:42.298005    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:42.298016    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:42.319366    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:42.319379    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:42.330980    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:42.330993    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:42.342834    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:42.342848    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:42.367645    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:42.367656    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:42.372409    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:42.372417    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:42.411135    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:42.411149    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:42.422558    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:42.422569    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:42.437165    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:42.437176    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:42.455688    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:42.455699    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:42.468131    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:42.468143    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:42.482955    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:42.482965    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
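Each failed probe triggers the same collection pass seen above: discover the container ID for every control-plane component with a docker ps name filter, then tail the last 400 lines of each container's log. Condensed into an equivalent bash loop (component names and the 400-line tail are taken from the logged commands; the loop itself is a sketch):

    # For every k8s_<component> container (running or exited), dump its log tail.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "=== ${c} [${id}] ==="
        docker logs --tail 400 "$id"
      done
    done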
	I1216 12:38:45.418119    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:44.999792    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:50.420569    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:50.420728    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:50.432524    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:50.432606    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:50.443542    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:50.443628    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:50.454190    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:50.454271    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:50.464959    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:50.465032    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:50.476141    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:50.476226    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:50.487642    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:50.487721    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:50.498081    6375 logs.go:282] 0 containers: []
	W1216 12:38:50.498091    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:50.498154    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:50.508749    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:50.508765    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:50.508772    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:50.522423    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:50.522435    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:50.534246    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:50.534258    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:50.557482    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:50.557489    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:50.572035    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:50.572044    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:50.609428    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:50.609435    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:50.613499    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:50.613506    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:50.627089    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:50.627099    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:50.638555    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:50.638566    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:50.655898    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:50.655908    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:50.667681    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:50.667692    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:50.682186    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:50.682197    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:50.706534    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:50.706545    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:50.720670    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:50.720681    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:50.742630    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:50.742640    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:50.780256    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:50.780270    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:50.801374    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:50.801390    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:53.315055    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:50.002485    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:50.002693    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:50.021945    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:50.022050    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:50.053077    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:50.053155    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:50.071298    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:50.071384    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:50.083544    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:50.083610    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:50.093764    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:50.093831    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:50.104556    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:50.104633    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:50.115134    6206 logs.go:282] 0 containers: []
	W1216 12:38:50.115146    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:50.115217    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:50.125824    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:50.125842    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:50.125847    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:50.140687    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:50.140696    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:50.155946    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:50.155958    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:50.167396    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:50.167408    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:50.202204    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:50.202219    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:50.214016    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:50.214032    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:50.225203    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:50.225214    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:50.262591    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:50.262598    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:50.267012    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:50.267018    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:50.280826    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:50.280837    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:50.292335    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:50.292350    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:50.316073    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:50.316088    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:50.330193    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:50.330207    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:50.342484    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:50.342495    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:50.354017    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:50.354030    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:52.876373    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:58.317379    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:58.317482    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:58.333275    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:58.333357    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:58.346443    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:58.346522    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:58.356494    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:58.356565    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:58.367577    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:58.367656    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:58.378352    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:58.378427    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:58.388899    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:58.388975    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:58.399441    6375 logs.go:282] 0 containers: []
	W1216 12:38:58.399451    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:58.399513    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:58.409632    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:58.409650    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:58.409656    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:58.429290    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:58.429300    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:58.468758    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:58.468770    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:58.490747    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:58.490759    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:58.505301    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:58.505311    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:58.517350    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:58.517362    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:58.531331    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:58.531341    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:58.543690    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:58.543702    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:58.568759    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:58.568771    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:58.584648    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:58.584660    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:58.596514    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:58.596524    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:58.608154    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:58.608164    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:58.619258    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:58.619271    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:58.642298    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:58.642306    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:57.878783    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:57.879027    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:57.897763    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:38:57.897862    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:57.912100    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:38:57.912187    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:57.924236    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:38:57.924317    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:57.935006    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:38:57.935077    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:57.945651    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:38:57.945736    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:57.960027    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:38:57.960098    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:57.969960    6206 logs.go:282] 0 containers: []
	W1216 12:38:57.969971    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:57.970035    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:57.980312    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:38:57.980330    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:57.980337    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:57.985342    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:38:57.985348    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:57.996544    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:57.996560    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:58.033461    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:38:58.033472    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:38:58.046691    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:58.046703    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:58.085285    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:38:58.085301    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:38:58.097788    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:38:58.097800    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:38:58.111817    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:38:58.111832    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:38:58.123441    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:38:58.123455    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:38:58.137845    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:38:58.137858    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:38:58.152957    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:38:58.152967    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:38:58.164358    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:38:58.164371    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:38:58.175909    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:38:58.175920    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:38:58.190351    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:38:58.190362    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:38:58.214933    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:58.214947    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:58.678135    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:58.678149    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:58.696508    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:58.696519    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:58.714389    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:58.714400    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:01.220839    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:00.743001    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:06.223473    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:06.223560    6375 kubeadm.go:597] duration metric: took 4m3.788947042s to restartPrimaryControlPlane
	W1216 12:39:06.223604    6375 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 12:39:06.223628    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 12:39:07.285209    6375 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.06156075s)
	I1216 12:39:07.285285    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:39:07.290322    6375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:39:07.293082    6375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:39:07.296178    6375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:39:07.296184    6375 kubeadm.go:157] found existing configuration files:
	
	I1216 12:39:07.296212    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf
	I1216 12:39:07.299401    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:39:07.299430    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:39:07.302203    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf
	I1216 12:39:07.304548    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:39:07.304573    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:39:07.307671    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf
	I1216 12:39:07.310537    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:39:07.310589    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:39:07.313500    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf
	I1216 12:39:07.316100    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:39:07.316135    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
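The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so the upcoming kubeadm init regenerates it. The same logic as a compact bash sketch (endpoint and file list exactly as logged):

    endpoint="https://control-plane.minikube.internal:51022"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the endpoint (or the file itself) is missing.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done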
	I1216 12:39:07.319326    6375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:39:07.335956    6375 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1216 12:39:07.335999    6375 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:39:07.388293    6375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:39:07.388352    6375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:39:07.388443    6375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:39:07.441949    6375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:39:07.448046    6375 out.go:235]   - Generating certificates and keys ...
	I1216 12:39:07.448082    6375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:39:07.448111    6375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:39:07.448153    6375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:39:07.448185    6375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:39:07.448221    6375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:39:07.448251    6375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:39:07.448285    6375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:39:07.448321    6375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:39:07.448364    6375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:39:07.448395    6375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:39:07.448414    6375 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:39:07.448446    6375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:39:07.490439    6375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:39:07.687701    6375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:39:07.807208    6375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:39:07.888744    6375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:39:07.918264    6375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:39:07.918750    6375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:39:07.918773    6375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:39:07.990035    6375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:39:07.994205    6375 out.go:235]   - Booting up control plane ...
	I1216 12:39:07.994253    6375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:39:07.994295    6375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:39:07.994358    6375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:39:07.994414    6375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:39:07.994549    6375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
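While kubeadm waits (up to 4m0s) for the kubelet to bring up the static pods, the manifests it just wrote are already on disk and the resulting containers can be inspected directly. A quick check along these lines (illustrative only; not part of the recorded run):

    # The four control-plane manifests kubeadm created in the step above.
    ls /etc/kubernetes/manifests
    # Confirm the apiserver container actually started (crictl if present,
    # otherwise fall back to docker, as the log-gathering commands do).
    sudo crictl ps --name kube-apiserver 2>/dev/null || sudo docker ps --filter name=k8s_kube-apiserver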
	I1216 12:39:05.745496    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:05.745738    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:05.766056    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:05.766167    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:05.784486    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:05.784571    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:05.795930    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:05.796031    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:05.806329    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:05.806403    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:05.816329    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:05.816398    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:05.831013    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:05.831088    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:05.844349    6206 logs.go:282] 0 containers: []
	W1216 12:39:05.844363    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:05.844429    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:05.854898    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:05.854915    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:05.854921    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:05.869666    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:05.869676    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:05.887326    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:05.887339    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:05.899663    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:05.899678    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:05.914082    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:05.914092    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:05.925678    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:05.925690    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:05.950142    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:05.950155    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:05.954415    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:05.954421    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:05.965432    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:05.965444    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:05.979744    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:05.979754    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:05.991912    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:05.991923    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:06.008013    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:06.008024    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:06.020055    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:06.020065    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:06.031683    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:06.031694    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:06.070388    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:06.070397    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:08.607033    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:12.495273    6375 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501540 seconds
	I1216 12:39:12.495353    6375 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 12:39:12.499086    6375 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 12:39:13.005797    6375 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 12:39:13.005912    6375 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-349000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 12:39:13.509552    6375 kubeadm.go:310] [bootstrap-token] Using token: hrztia.rg8izit14ku9t5ga
	I1216 12:39:13.515743    6375 out.go:235]   - Configuring RBAC rules ...
	I1216 12:39:13.515801    6375 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 12:39:13.515852    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 12:39:13.523751    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 12:39:13.525835    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 12:39:13.526694    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 12:39:13.527438    6375 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 12:39:13.530596    6375 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 12:39:13.609270    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:13.609400    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:13.620756    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:13.620840    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:13.632381    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:13.632468    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:13.643582    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:13.643663    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:13.654216    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:13.654292    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:13.664325    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:13.664405    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:13.678885    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:13.678969    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:13.695180    6206 logs.go:282] 0 containers: []
	W1216 12:39:13.695192    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:13.695265    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:13.707113    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:13.707135    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:13.707142    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:13.729067    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:13.729081    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:13.746016    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:13.746030    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:13.758940    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:13.758954    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:13.778043    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:13.778064    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:13.819776    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:13.819798    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:13.831905    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:13.831919    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:13.846975    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:13.846994    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:13.695460    6375 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 12:39:13.914509    6375 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 12:39:13.915078    6375 kubeadm.go:310] 
	I1216 12:39:13.915114    6375 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 12:39:13.915119    6375 kubeadm.go:310] 
	I1216 12:39:13.915163    6375 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 12:39:13.915169    6375 kubeadm.go:310] 
	I1216 12:39:13.915233    6375 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 12:39:13.915267    6375 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 12:39:13.915354    6375 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 12:39:13.915360    6375 kubeadm.go:310] 
	I1216 12:39:13.915390    6375 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 12:39:13.915417    6375 kubeadm.go:310] 
	I1216 12:39:13.915451    6375 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 12:39:13.915488    6375 kubeadm.go:310] 
	I1216 12:39:13.915516    6375 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 12:39:13.915576    6375 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 12:39:13.915649    6375 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 12:39:13.915663    6375 kubeadm.go:310] 
	I1216 12:39:13.915767    6375 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 12:39:13.915811    6375 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 12:39:13.915814    6375 kubeadm.go:310] 
	I1216 12:39:13.915885    6375 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hrztia.rg8izit14ku9t5ga \
	I1216 12:39:13.915936    6375 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f \
	I1216 12:39:13.915952    6375 kubeadm.go:310] 	--control-plane 
	I1216 12:39:13.915955    6375 kubeadm.go:310] 
	I1216 12:39:13.915992    6375 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 12:39:13.915994    6375 kubeadm.go:310] 
	I1216 12:39:13.916033    6375 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hrztia.rg8izit14ku9t5ga \
	I1216 12:39:13.916082    6375 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f 
	I1216 12:39:13.916213    6375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
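The only preflight warning in this init is the disabled kubelet unit. On a regular node it would be resolved exactly as kubeadm suggests (shown here for completeness; minikube manages the service itself):

    sudo systemctl enable kubelet.service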
	I1216 12:39:13.916246    6375 cni.go:84] Creating CNI manager for ""
	I1216 12:39:13.916254    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:39:13.920605    6375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 12:39:13.927647    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 12:39:13.931390    6375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
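The 496-byte conflist itself is not reproduced in the log. A representative bridge CNI configuration of the kind written here looks roughly like the following; the contents are an assumption for illustration (standard bridge + portmap plugins with host-local IPAM), not the actual file:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF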
	I1216 12:39:13.937819    6375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 12:39:13.937938    6375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-349000 minikube.k8s.io/updated_at=2024_12_16T12_39_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=stopped-upgrade-349000 minikube.k8s.io/primary=true
	I1216 12:39:13.937969    6375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 12:39:13.943200    6375 ops.go:34] apiserver oom_adj: -16
	I1216 12:39:13.987290    6375 kubeadm.go:1113] duration metric: took 49.449459ms to wait for elevateKubeSystemPrivileges
	I1216 12:39:13.987307    6375 kubeadm.go:394] duration metric: took 4m11.566088709s to StartCluster
	I1216 12:39:13.987319    6375 settings.go:142] acquiring lock: {Name:mk8b3a21b6dc2a47a05d302a72ae4dd9a4679c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:39:13.987417    6375 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:39:13.987868    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:39:13.988069    6375 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:39:13.988092    6375 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 12:39:13.988168    6375 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:39:13.988176    6375 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-349000"
	I1216 12:39:13.988184    6375 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-349000"
	W1216 12:39:13.988187    6375 addons.go:243] addon storage-provisioner should already be in state true
	I1216 12:39:13.988229    6375 host.go:66] Checking if "stopped-upgrade-349000" exists ...
	I1216 12:39:13.988196    6375 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-349000"
	I1216 12:39:13.988247    6375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-349000"
	I1216 12:39:13.990583    6375 out.go:177] * Verifying Kubernetes components...
	I1216 12:39:13.991366    6375 kapi.go:59] client config for stopped-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106cfef70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:39:13.994875    6375 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-349000"
	W1216 12:39:13.994880    6375 addons.go:243] addon default-storageclass should already be in state true
	I1216 12:39:13.994891    6375 host.go:66] Checking if "stopped-upgrade-349000" exists ...
	I1216 12:39:13.995445    6375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 12:39:13.995450    6375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 12:39:13.995455    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:39:13.998633    6375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:39:14.001719    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:39:14.004656    6375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 12:39:14.004662    6375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 12:39:14.004667    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:39:14.076120    6375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:39:14.081936    6375 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:39:14.082010    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:39:14.085788    6375 api_server.go:72] duration metric: took 97.707041ms to wait for apiserver process to appear ...
	I1216 12:39:14.085796    6375 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:39:14.085804    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:14.092252    6375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 12:39:14.110510    6375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
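With both addon manifests applied, the result could be confirmed against the same kubeconfig and kubectl binary the test uses, for example (an illustrative check, not performed in this run):

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get storageclass,pods -n kube-system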
	I1216 12:39:14.463400    6375 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 12:39:14.463413    6375 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 12:39:13.860363    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:13.860377    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:13.871898    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:13.871912    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:13.883831    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:13.883844    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:13.909547    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:13.909566    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:13.921845    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:13.921855    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:13.926656    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:13.926669    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:13.969465    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:13.969478    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:16.486788    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:19.087937    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:19.087983    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:21.489023    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:21.489131    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:21.500854    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:21.500940    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:21.511836    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:21.511924    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:21.522402    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:21.522483    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:21.532813    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:21.532889    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:21.543267    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:21.543350    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:21.553709    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:21.553786    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:21.563897    6206 logs.go:282] 0 containers: []
	W1216 12:39:21.563907    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:21.563970    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:21.574137    6206 logs.go:282] 1 containers: [3b922961d012]
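	(The eight docker ps calls above form one discovery pass: one name filter per control-plane component, matching the kubeadm container naming scheme k8s_<component>_<pod>_... A compact re-creation of that pass:)

	    # Re-create the per-component container discovery from the log above
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-<none>}"
	    done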
	I1216 12:39:21.574154    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:21.574160    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:21.578740    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:21.578747    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:21.595131    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:21.595146    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:21.607067    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:21.607078    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:21.645623    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:21.645633    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:21.657241    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:21.657252    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:21.669040    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:21.669050    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:21.680229    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:21.680240    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:21.706046    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:21.706053    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:21.746032    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:21.746044    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:21.760261    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:21.760274    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:21.777609    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:21.777622    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:21.792260    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:21.792273    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:21.811051    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:21.811064    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:21.823158    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:21.823172    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
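	(The "container status" gatherer embeds a fallback: when crictl is missing, `which crictl || echo crictl` substitutes the literal word crictl, so the sudo invocation fails fast and the `|| sudo docker ps -a` branch runs instead. The same logic long-hand:)

	    # Long-hand version of the crictl-or-docker fallback used above
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi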
	I1216 12:39:24.088292    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:24.088334    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:24.336662    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:29.088735    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:29.088763    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:29.338938    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:29.339056    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:29.350756    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:29.350832    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:29.362273    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:29.362361    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:29.373954    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:29.374078    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:29.390550    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:29.390622    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:29.400827    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:29.400911    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:29.413535    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:29.413616    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:29.423654    6206 logs.go:282] 0 containers: []
	W1216 12:39:29.423665    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:29.423732    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:29.433918    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:29.433936    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:29.433942    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:29.448265    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:29.448275    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:29.460244    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:29.460257    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:29.471558    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:29.471569    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:29.496592    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:29.496600    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:29.533929    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:29.533942    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:29.547913    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:29.547923    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:29.559996    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:29.560008    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:29.565062    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:29.565069    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:29.576584    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:29.576595    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:29.615934    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:29.615945    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:29.627983    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:29.627996    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:29.639592    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:29.639603    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:29.658403    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:29.658414    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:29.675550    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:29.675560    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:32.193601    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:34.089255    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:34.089297    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:37.195316    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:37.195429    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:37.207014    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:37.207095    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:37.217327    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:37.217398    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:37.232158    6206 logs.go:282] 4 containers: [6408be651234 857d26c080c8 913aa0aa8c39 bf6b78109554]
	I1216 12:39:37.232239    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:37.243030    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:37.243105    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:37.253548    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:37.253621    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:37.266824    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:37.266904    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:37.277684    6206 logs.go:282] 0 containers: []
	W1216 12:39:37.277699    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:37.277766    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:37.288359    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:37.288377    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:37.288382    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:37.300067    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:37.300082    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:37.325136    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:37.325149    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:37.336985    6206 logs.go:123] Gathering logs for coredns [913aa0aa8c39] ...
	I1216 12:39:37.336997    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 913aa0aa8c39"
	I1216 12:39:37.349398    6206 logs.go:123] Gathering logs for coredns [bf6b78109554] ...
	I1216 12:39:37.349411    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf6b78109554"
	I1216 12:39:37.361356    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:37.361368    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:37.373481    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:37.373494    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:37.392473    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:37.392484    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:37.407751    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:37.407762    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:37.421891    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:37.421907    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:37.437332    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:37.437349    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:37.449286    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:37.449300    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:37.461629    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:37.461641    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:37.498194    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:37.498204    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:37.502579    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:37.502587    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:39.089968    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:39.090010    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:40.040808    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:44.090872    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:44.090929    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1216 12:39:44.466028    6375 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1216 12:39:44.470263    6375 out.go:177] * Enabled addons: storage-provisioner
	I1216 12:39:44.477164    6375 addons.go:510] duration metric: took 30.488819125s for enable addons: enabled=[storage-provisioner]
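	(The default-storageclass addon failed only on its follow-up step — listing StorageClasses over the unreachable apiserver endpoint — while the manifests themselves were submitted at 12:39:14. Once the apiserver answers, the outcome can be verified as below; the default class is the one kubectl flags as "(default)", driven by the storageclass.kubernetes.io/is-default-class annotation:)

	    # Verify the default StorageClass once the API is reachable again
	    kubectl get storageclass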
	I1216 12:39:45.041737    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:45.041949    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:39:45.060686    6206 logs.go:282] 1 containers: [2d72cd87e3d8]
	I1216 12:39:45.060805    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:39:45.074527    6206 logs.go:282] 1 containers: [6f91b5d2d6fc]
	I1216 12:39:45.074605    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:39:45.086603    6206 logs.go:282] 4 containers: [67be7aaf65be 63cd7ff1772f 6408be651234 857d26c080c8]
	I1216 12:39:45.086685    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:39:45.097846    6206 logs.go:282] 1 containers: [15f72a877fae]
	I1216 12:39:45.097926    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:39:45.108500    6206 logs.go:282] 1 containers: [bd335ebc69ca]
	I1216 12:39:45.108569    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:39:45.125052    6206 logs.go:282] 1 containers: [cca41d4888dc]
	I1216 12:39:45.125133    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:39:45.134963    6206 logs.go:282] 0 containers: []
	W1216 12:39:45.134975    6206 logs.go:284] No container was found matching "kindnet"
	I1216 12:39:45.135041    6206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:39:45.145616    6206 logs.go:282] 1 containers: [3b922961d012]
	I1216 12:39:45.145633    6206 logs.go:123] Gathering logs for kube-proxy [bd335ebc69ca] ...
	I1216 12:39:45.145638    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd335ebc69ca"
	I1216 12:39:45.157547    6206 logs.go:123] Gathering logs for Docker ...
	I1216 12:39:45.157563    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:39:45.182165    6206 logs.go:123] Gathering logs for coredns [67be7aaf65be] ...
	I1216 12:39:45.182181    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67be7aaf65be"
	I1216 12:39:45.193890    6206 logs.go:123] Gathering logs for coredns [6408be651234] ...
	I1216 12:39:45.193901    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6408be651234"
	I1216 12:39:45.205914    6206 logs.go:123] Gathering logs for coredns [857d26c080c8] ...
	I1216 12:39:45.205926    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857d26c080c8"
	I1216 12:39:45.217741    6206 logs.go:123] Gathering logs for kube-apiserver [2d72cd87e3d8] ...
	I1216 12:39:45.217754    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d72cd87e3d8"
	I1216 12:39:45.232654    6206 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:39:45.232667    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:39:45.267710    6206 logs.go:123] Gathering logs for etcd [6f91b5d2d6fc] ...
	I1216 12:39:45.267721    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f91b5d2d6fc"
	I1216 12:39:45.282311    6206 logs.go:123] Gathering logs for coredns [63cd7ff1772f] ...
	I1216 12:39:45.282320    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63cd7ff1772f"
	I1216 12:39:45.293809    6206 logs.go:123] Gathering logs for container status ...
	I1216 12:39:45.293822    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:39:45.305451    6206 logs.go:123] Gathering logs for dmesg ...
	I1216 12:39:45.305461    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:45.310708    6206 logs.go:123] Gathering logs for kube-scheduler [15f72a877fae] ...
	I1216 12:39:45.310717    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15f72a877fae"
	I1216 12:39:45.325475    6206 logs.go:123] Gathering logs for kube-controller-manager [cca41d4888dc] ...
	I1216 12:39:45.325486    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cca41d4888dc"
	I1216 12:39:45.343283    6206 logs.go:123] Gathering logs for storage-provisioner [3b922961d012] ...
	I1216 12:39:45.343294    6206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b922961d012"
	I1216 12:39:45.355384    6206 logs.go:123] Gathering logs for kubelet ...
	I1216 12:39:45.355399    6206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:39:47.896773    6206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:52.899236    6206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:52.906402    6206 out.go:201] 
	W1216 12:39:52.910301    6206 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1216 12:39:52.910319    6206 out.go:270] * 
	W1216 12:39:52.911692    6206 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:39:52.921271    6206 out.go:201] 
	I1216 12:39:49.092099    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:49.092175    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:54.093801    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:54.093853    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:59.094349    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:59.094418    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:04.096512    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:04.096538    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
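	(Worth noting: 10.0.2.15 is the default guest address of QEMU's user-mode (slirp) networking, which is not routable from the macOS host, so a host-side probe of this endpoint can time out even when the apiserver is healthy. One cross-check from inside the guest, assuming curl ships in the ISO:)

	    # Probe the apiserver from inside the VM instead of from the host
	    minikube ssh -p running-upgrade-868000 -- curl -sk https://localhost:8443/healthz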
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-12-16 20:30:51 UTC, ends at Mon 2024-12-16 20:40:09 UTC. --
	Dec 16 20:39:48 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 16 20:39:53 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:53Z" level=error msg="ContainerStats resp: {0x4000a85f80 linux}"
	Dec 16 20:39:53 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:53Z" level=error msg="ContainerStats resp: {0x40004e6540 linux}"
	Dec 16 20:39:53 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 16 20:39:54 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:54Z" level=error msg="ContainerStats resp: {0x4000656cc0 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000790140 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000656400 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000790440 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000790880 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000657580 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000657dc0 linux}"
	Dec 16 20:39:55 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:55Z" level=error msg="ContainerStats resp: {0x4000791640 linux}"
	Dec 16 20:39:58 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:39:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 16 20:40:03 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 16 20:40:05 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:05Z" level=error msg="ContainerStats resp: {0x4000a84040 linux}"
	Dec 16 20:40:05 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:05Z" level=error msg="ContainerStats resp: {0x4000038280 linux}"
	Dec 16 20:40:06 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:06Z" level=error msg="ContainerStats resp: {0x4000968180 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000039f00 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000969440 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000969740 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000969d00 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000790300 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000790900 linux}"
	Dec 16 20:40:07 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:07Z" level=error msg="ContainerStats resp: {0x4000790d00 linux}"
	Dec 16 20:40:08 running-upgrade-868000 cri-dockerd[3091]: time="2024-12-16T20:40:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
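	(To pull the same window of dockerd and cri-dockerd output shown above, a journalctl query scoped to the journal boundaries reported at the top of this section works; a sketch:)

	    # Fetch the docker/cri-docker units for the journal window above
	    minikube ssh -p running-upgrade-868000 -- \
	      sudo journalctl -u docker -u cri-docker --since "2024-12-16 20:30:51" --until "2024-12-16 20:40:09"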
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	67be7aaf65be7       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   889a5d65b37a9
	63cd7ff1772f9       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   33027a10b7183
	6408be6512347       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   889a5d65b37a9
	857d26c080c88       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   33027a10b7183
	bd335ebc69ca7       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   6aae432f90f4c
	3b922961d0122       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   80d2309a13436
	15f72a877fae0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   0c43ef89ac3c2
	cca41d4888dc7       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   f4f7140ac2238
	2d72cd87e3d86       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   118e76802f964
	6f91b5d2d6fc1       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c1ff4701ff732
	
	
	==> coredns [63cd7ff1772f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:48434->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:48273->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:39602->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:48327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:60862->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:57699->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1042499370582907985.5274874781558182222. HINFO: read udp 10.244.0.2:42994->10.0.2.3:53: i/o timeout
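	(This and the following coredns sections show the same pattern: HINFO self-test queries to 10.0.2.3:53 — slirp's built-in DNS forwarder — time out. A quick check against that upstream from inside the guest, assuming a busybox-style nslookup is available:)

	    # Query the upstream resolver coredns is forwarding to (slirp DNS)
	    minikube ssh -p running-upgrade-868000 -- nslookup kubernetes.io 10.0.2.3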
	
	
	==> coredns [6408be651234] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:33456->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:57853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:40772->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:56328->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:54033->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:49710->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:53618->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:56339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:50962->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1569654138271678381.4645817059815744838. HINFO: read udp 10.244.0.3:55554->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [67be7aaf65be] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:54612->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:59560->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:50670->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:55936->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:38818->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:36728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7494868237376681356.4557624411148067285. HINFO: read udp 10.244.0.3:50665->10.0.2.3:53: i/o timeout
	
	
	==> coredns [857d26c080c8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:58858->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:49216->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:52855->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:57290->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:32968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:52153->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:49539->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:57480->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:38723->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4852366866527596182.3445978546080198163. HINFO: read udp 10.244.0.2:32782->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-868000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-868000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=running-upgrade-868000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T12_35_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 20:35:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-868000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 20:40:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 20:35:51 +0000   Mon, 16 Dec 2024 20:35:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 20:35:51 +0000   Mon, 16 Dec 2024 20:35:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 20:35:51 +0000   Mon, 16 Dec 2024 20:35:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 20:35:51 +0000   Mon, 16 Dec 2024 20:35:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-868000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148872Ki
	  pods:               110
	System Info:
	  Machine ID:                 39874fbea17649ed94c0651aae1055fb
	  System UUID:                39874fbea17649ed94c0651aae1055fb
	  Boot ID:                    f6c066d5-be24-4be0-a2ec-0dc44b05028a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nns5v                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-vhq9x                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-868000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-868000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-868000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-t6m6d                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-868000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-868000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-868000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-868000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-868000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-868000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-868000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-868000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-868000 event: Registered Node running-upgrade-868000 in Controller
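	(A compact way to re-read the conditions table above, using the same embedded kubectl the log gatherer calls; a sketch:)

	    # Node conditions as type=status pairs
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get node running-upgrade-868000 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'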
	
	
	==> dmesg <==
	[  +2.226357] systemd-fstab-generator[880]: Ignoring "noauto" for root device
	[  +0.061768] systemd-fstab-generator[891]: Ignoring "noauto" for root device
	[  +0.058747] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +1.135167] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.074776] systemd-fstab-generator[1053]: Ignoring "noauto" for root device
	[  +0.065221] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +2.798240] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +9.161670] systemd-fstab-generator[1944]: Ignoring "noauto" for root device
	[  +2.361320] systemd-fstab-generator[2222]: Ignoring "noauto" for root device
	[  +0.192945] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.089357] systemd-fstab-generator[2272]: Ignoring "noauto" for root device
	[  +0.098829] systemd-fstab-generator[2285]: Ignoring "noauto" for root device
	[ +12.847409] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.235825] systemd-fstab-generator[3046]: Ignoring "noauto" for root device
	[  +0.090301] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
	[  +0.083508] systemd-fstab-generator[3070]: Ignoring "noauto" for root device
	[  +0.073612] systemd-fstab-generator[3084]: Ignoring "noauto" for root device
	[  +2.761134] systemd-fstab-generator[3238]: Ignoring "noauto" for root device
	[  +2.671555] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[  +2.176485] systemd-fstab-generator[3945]: Ignoring "noauto" for root device
	[ +18.323299] kauditd_printk_skb: 68 callbacks suppressed
	[Dec16 20:35] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.607334] systemd-fstab-generator[12068]: Ignoring "noauto" for root device
	[  +5.637391] systemd-fstab-generator[12668]: Ignoring "noauto" for root device
	[  +0.456450] systemd-fstab-generator[12802]: Ignoring "noauto" for root device
	
	
	==> etcd [6f91b5d2d6fc] <==
	{"level":"info","ts":"2024-12-16T20:35:47.221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-16T20:35:47.221Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-16T20:35:47.222Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T20:35:47.222Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T20:35:47.222Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-16T20:35:47.222Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-16T20:35:47.222Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T20:35:47.917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-16T20:35:47.918Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:35:47.922Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:35:47.922Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:35:47.922Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:35:47.922Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-868000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T20:35:47.922Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:35:47.922Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-16T20:35:47.923Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:35:47.923Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T20:35:47.939Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T20:35:47.940Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:40:09 up 9 min,  0 users,  load average: 0.43, 0.48, 0.25
	Linux running-upgrade-868000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2d72cd87e3d8] <==
	I1216 20:35:49.200538       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 20:35:49.200605       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1216 20:35:49.202500       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1216 20:35:49.202892       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 20:35:49.202904       1 cache.go:39] Caches are synced for autoregister controller
	I1216 20:35:49.216852       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1216 20:35:49.224843       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1216 20:35:49.941494       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1216 20:35:50.104981       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1216 20:35:50.107540       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1216 20:35:50.107561       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 20:35:50.229871       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 20:35:50.239361       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 20:35:50.276368       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1216 20:35:50.278018       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1216 20:35:50.278403       1 controller.go:611] quota admission added evaluator for: endpoints
	I1216 20:35:50.279672       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 20:35:51.235396       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1216 20:35:51.672756       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1216 20:35:51.676512       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1216 20:35:51.680972       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1216 20:35:51.731994       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 20:36:04.485467       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1216 20:36:05.034930       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1216 20:36:05.698146       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
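	(This startup log is unremarkable: caches synced, quota evaluators registered, no crash. That is consistent with the healthz timeouts being a host-to-guest reachability problem rather than an apiserver failure. From inside the guest, the same endpoint can be asked directly:)

	    # Ask the apiserver for /healthz via the embedded kubectl
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz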
	
	
	==> kube-controller-manager [cca41d4888dc] <==
	I1216 20:36:04.105540       1 shared_informer.go:262] Caches are synced for persistent volume
	I1216 20:36:04.132740       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1216 20:36:04.132786       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1216 20:36:04.132808       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1216 20:36:04.132843       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1216 20:36:04.149035       1 shared_informer.go:262] Caches are synced for TTL
	I1216 20:36:04.183994       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1216 20:36:04.185609       1 shared_informer.go:262] Caches are synced for taint
	I1216 20:36:04.185680       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1216 20:36:04.185733       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-868000. Assuming now as a timestamp.
	I1216 20:36:04.185771       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1216 20:36:04.185905       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1216 20:36:04.186115       1 event.go:294] "Event occurred" object="running-upgrade-868000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-868000 event: Registered Node running-upgrade-868000 in Controller"
	I1216 20:36:04.189476       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 20:36:04.203981       1 shared_informer.go:262] Caches are synced for stateful set
	I1216 20:36:04.252368       1 shared_informer.go:262] Caches are synced for attach detach
	I1216 20:36:04.264049       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 20:36:04.283432       1 shared_informer.go:262] Caches are synced for daemon sets
	I1216 20:36:04.486974       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1216 20:36:04.707552       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 20:36:04.737669       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 20:36:04.737681       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1216 20:36:05.037823       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t6m6d"
	I1216 20:36:05.086058       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vhq9x"
	I1216 20:36:05.088409       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nns5v"
	
	
	==> kube-proxy [bd335ebc69ca] <==
	I1216 20:36:05.677730       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1216 20:36:05.677753       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1216 20:36:05.677762       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1216 20:36:05.695802       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1216 20:36:05.695811       1 server_others.go:206] "Using iptables Proxier"
	I1216 20:36:05.695824       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1216 20:36:05.695921       1 server.go:661] "Version info" version="v1.24.1"
	I1216 20:36:05.695924       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:36:05.696563       1 config.go:317] "Starting service config controller"
	I1216 20:36:05.696568       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1216 20:36:05.696577       1 config.go:226] "Starting endpoint slice config controller"
	I1216 20:36:05.696579       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1216 20:36:05.696801       1 config.go:444] "Starting node config controller"
	I1216 20:36:05.696803       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1216 20:36:05.796964       1 shared_informer.go:262] Caches are synced for service config
	I1216 20:36:05.796991       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1216 20:36:05.797029       1 shared_informer.go:262] Caches are synced for node config
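	(kube-proxy came up in iptables mode and synced all three of its config controllers. The service rules it programs land in the KUBE-SERVICES chain of the nat table; a quick look from inside the guest:)

	    # List the first few service rules kube-proxy programmed
	    minikube ssh -p running-upgrade-868000 -- \
	      sudo iptables -t nat -L KUBE-SERVICES -n | head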
	
	
	==> kube-scheduler [15f72a877fae] <==
	W1216 20:35:49.163224       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 20:35:49.163485       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1216 20:35:49.163234       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 20:35:49.163532       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1216 20:35:49.163263       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 20:35:49.163564       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 20:35:49.163274       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 20:35:49.163622       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1216 20:35:49.163285       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 20:35:49.163653       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1216 20:35:49.163295       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 20:35:49.163698       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1216 20:35:49.163307       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 20:35:49.163733       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1216 20:35:49.163328       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 20:35:49.163782       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1216 20:35:49.163616       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 20:35:49.163817       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1216 20:35:50.019749       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 20:35:50.019770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1216 20:35:50.099148       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 20:35:50.099166       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1216 20:35:50.144199       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 20:35:50.144292       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1216 20:35:51.852780       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-12-16 20:30:51 UTC, ends at Mon 2024-12-16 20:40:09 UTC. --
	Dec 16 20:35:53 running-upgrade-868000 kubelet[12674]: E1216 20:35:53.506640   12674 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-868000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-868000"
	Dec 16 20:35:53 running-upgrade-868000 kubelet[12674]: E1216 20:35:53.709414   12674 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-868000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-868000"
	Dec 16 20:35:53 running-upgrade-868000 kubelet[12674]: I1216 20:35:53.900389   12674 request.go:601] Waited for 1.123601498s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Dec 16 20:35:53 running-upgrade-868000 kubelet[12674]: E1216 20:35:53.904655   12674 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-868000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-868000"
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: I1216 20:36:04.139864   12674 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: I1216 20:36:04.140177   12674 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: I1216 20:36:04.191502   12674 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: I1216 20:36:04.242045   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxdlh\" (UniqueName: \"kubernetes.io/projected/560450df-f2e5-484d-bca7-449f14064586-kube-api-access-xxdlh\") pod \"storage-provisioner\" (UID: \"560450df-f2e5-484d-bca7-449f14064586\") " pod="kube-system/storage-provisioner"
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: I1216 20:36:04.242088   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/560450df-f2e5-484d-bca7-449f14064586-tmp\") pod \"storage-provisioner\" (UID: \"560450df-f2e5-484d-bca7-449f14064586\") " pod="kube-system/storage-provisioner"
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: E1216 20:36:04.346882   12674 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: E1216 20:36:04.346901   12674 projected.go:192] Error preparing data for projected volume kube-api-access-xxdlh for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 16 20:36:04 running-upgrade-868000 kubelet[12674]: E1216 20:36:04.346937   12674 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/560450df-f2e5-484d-bca7-449f14064586-kube-api-access-xxdlh podName:560450df-f2e5-484d-bca7-449f14064586 nodeName:}" failed. No retries permitted until 2024-12-16 20:36:04.846926254 +0000 UTC m=+13.194608105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xxdlh" (UniqueName: "kubernetes.io/projected/560450df-f2e5-484d-bca7-449f14064586-kube-api-access-xxdlh") pod "storage-provisioner" (UID: "560450df-f2e5-484d-bca7-449f14064586") : configmap "kube-root-ca.crt" not found
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.040907   12674 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.090280   12674 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.092384   12674 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149814   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd728844-8695-43bd-b33e-14496c51de6b-xtables-lock\") pod \"kube-proxy-t6m6d\" (UID: \"bd728844-8695-43bd-b33e-14496c51de6b\") " pod="kube-system/kube-proxy-t6m6d"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149834   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd728844-8695-43bd-b33e-14496c51de6b-lib-modules\") pod \"kube-proxy-t6m6d\" (UID: \"bd728844-8695-43bd-b33e-14496c51de6b\") " pod="kube-system/kube-proxy-t6m6d"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149862   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlf9\" (UniqueName: \"kubernetes.io/projected/63f2b165-acac-48a9-ac1d-d4c8f1d539c0-kube-api-access-khlf9\") pod \"coredns-6d4b75cb6d-vhq9x\" (UID: \"63f2b165-acac-48a9-ac1d-d4c8f1d539c0\") " pod="kube-system/coredns-6d4b75cb6d-vhq9x"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149875   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wgrs\" (UniqueName: \"kubernetes.io/projected/f2d76906-2b56-4846-a7ce-f32dee761cea-kube-api-access-9wgrs\") pod \"coredns-6d4b75cb6d-nns5v\" (UID: \"f2d76906-2b56-4846-a7ce-f32dee761cea\") " pod="kube-system/coredns-6d4b75cb6d-nns5v"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149887   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd728844-8695-43bd-b33e-14496c51de6b-kube-proxy\") pod \"kube-proxy-t6m6d\" (UID: \"bd728844-8695-43bd-b33e-14496c51de6b\") " pod="kube-system/kube-proxy-t6m6d"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149898   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d76906-2b56-4846-a7ce-f32dee761cea-config-volume\") pod \"coredns-6d4b75cb6d-nns5v\" (UID: \"f2d76906-2b56-4846-a7ce-f32dee761cea\") " pod="kube-system/coredns-6d4b75cb6d-nns5v"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149911   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqkfr\" (UniqueName: \"kubernetes.io/projected/bd728844-8695-43bd-b33e-14496c51de6b-kube-api-access-pqkfr\") pod \"kube-proxy-t6m6d\" (UID: \"bd728844-8695-43bd-b33e-14496c51de6b\") " pod="kube-system/kube-proxy-t6m6d"
	Dec 16 20:36:05 running-upgrade-868000 kubelet[12674]: I1216 20:36:05.149920   12674 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f2b165-acac-48a9-ac1d-d4c8f1d539c0-config-volume\") pod \"coredns-6d4b75cb6d-vhq9x\" (UID: \"63f2b165-acac-48a9-ac1d-d4c8f1d539c0\") " pod="kube-system/coredns-6d4b75cb6d-vhq9x"
	Dec 16 20:39:44 running-upgrade-868000 kubelet[12674]: I1216 20:39:44.114055   12674 scope.go:110] "RemoveContainer" containerID="913aa0aa8c39d792402ecbaac81ccc3a7b68bdba2eccf2504b41a995ce3555ab"
	Dec 16 20:39:44 running-upgrade-868000 kubelet[12674]: I1216 20:39:44.137616   12674 scope.go:110] "RemoveContainer" containerID="bf6b78109554ced8f55f469a001155c1fb3076f9f566a7125eb47ebd7b6589c3"
	
	
	==> storage-provisioner [3b922961d012] <==
	I1216 20:36:05.284507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 20:36:05.289649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 20:36:05.289665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 20:36:05.294180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 20:36:05.294228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-868000_10385760-aae0-4f24-9c0e-9c031b52dec1!
	I1216 20:36:05.294866       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53c8036f-81fc-40df-b835-bd6b859bd869", APIVersion:"v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-868000_10385760-aae0-4f24-9c0e-9c031b52dec1 became leader
	I1216 20:36:05.395231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-868000_10385760-aae0-4f24-9c0e-9c031b52dec1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-868000 -n running-upgrade-868000
E1216 12:40:18.620010    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-868000 -n running-upgrade-868000: exit status 2 (15.599860084s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-868000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-868000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-868000
--- FAIL: TestRunningBinaryUpgrade (605.48s)
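
For reference, a `--format` argument such as `{{.APIServer}}` in the status probes above is an ordinary Go text/template string rendered against the profile's status struct. A minimal, self-contained sketch of that rendering (the `Status` type below is a hypothetical stand-in with field names taken from this report, not minikube's actual type):

-- example (illustration only, not part of the test run) --
package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in; minikube's real status struct has
// more fields, but the template mechanism is the same.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
	// A --format value is parsed exactly like this template string.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}
-- /example --

Run against the struct above this prints "Stopped", which is how the `-- stdout --` block earlier reduces an entire status report to the single word the test asserts on.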

TestKubernetesUpgrade (18.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-781000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
E1216 12:33:28.362180    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-781000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.900219166s)

-- stdout --
	* [kubernetes-upgrade-781000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-781000" primary control-plane node in "kubernetes-upgrade-781000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-781000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:33:22.746799    6288 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:33:22.746954    6288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:33:22.746957    6288 out.go:358] Setting ErrFile to fd 2...
	I1216 12:33:22.746959    6288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:33:22.747093    6288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:33:22.748267    6288 out.go:352] Setting JSON to false
	I1216 12:33:22.767382    6288 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3773,"bootTime":1734377429,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:33:22.767456    6288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:33:22.773868    6288 out.go:177] * [kubernetes-upgrade-781000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:33:22.777844    6288 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:33:22.777903    6288 notify.go:220] Checking for updates...
	I1216 12:33:22.784811    6288 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:33:22.787752    6288 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:33:22.790833    6288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:33:22.794809    6288 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:33:22.797793    6288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:33:22.801131    6288 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:33:22.801213    6288 config.go:182] Loaded profile config "running-upgrade-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:33:22.801265    6288 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:33:22.805844    6288 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:33:22.812789    6288 start.go:297] selected driver: qemu2
	I1216 12:33:22.812796    6288 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:33:22.812802    6288 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:33:22.815342    6288 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:33:22.819830    6288 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:33:22.822820    6288 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 12:33:22.822840    6288 cni.go:84] Creating CNI manager for ""
	I1216 12:33:22.822874    6288 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 12:33:22.822913    6288 start.go:340] cluster config:
	{Name:kubernetes-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:33:22.827704    6288 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:33:22.835782    6288 out.go:177] * Starting "kubernetes-upgrade-781000" primary control-plane node in "kubernetes-upgrade-781000" cluster
	I1216 12:33:22.839827    6288 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 12:33:22.839845    6288 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 12:33:22.839858    6288 cache.go:56] Caching tarball of preloaded images
	I1216 12:33:22.839954    6288 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:33:22.839959    6288 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 12:33:22.840025    6288 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kubernetes-upgrade-781000/config.json ...
	I1216 12:33:22.840036    6288 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kubernetes-upgrade-781000/config.json: {Name:mk966c7a9f6b82322bf2c227647755d683842e65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:33:22.840413    6288 start.go:360] acquireMachinesLock for kubernetes-upgrade-781000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:33:22.840462    6288 start.go:364] duration metric: took 42.208µs to acquireMachinesLock for "kubernetes-upgrade-781000"
	I1216 12:33:22.840474    6288 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:33:22.840500    6288 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:33:22.848779    6288 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:33:22.873900    6288 start.go:159] libmachine.API.Create for "kubernetes-upgrade-781000" (driver="qemu2")
	I1216 12:33:22.873937    6288 client.go:168] LocalClient.Create starting
	I1216 12:33:22.874032    6288 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:33:22.874068    6288 main.go:141] libmachine: Decoding PEM data...
	I1216 12:33:22.874081    6288 main.go:141] libmachine: Parsing certificate...
	I1216 12:33:22.874119    6288 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:33:22.874151    6288 main.go:141] libmachine: Decoding PEM data...
	I1216 12:33:22.874160    6288 main.go:141] libmachine: Parsing certificate...
	I1216 12:33:22.874527    6288 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:33:23.094508    6288 main.go:141] libmachine: Creating SSH key...
	I1216 12:33:23.173112    6288 main.go:141] libmachine: Creating Disk image...
	I1216 12:33:23.173120    6288 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:33:23.173362    6288 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:23.188525    6288 main.go:141] libmachine: STDOUT: 
	I1216 12:33:23.188546    6288 main.go:141] libmachine: STDERR: 
	I1216 12:33:23.188624    6288 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2 +20000M
	I1216 12:33:23.197216    6288 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:33:23.197233    6288 main.go:141] libmachine: STDERR: 
	I1216 12:33:23.197253    6288 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:23.197260    6288 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:33:23.197272    6288 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:33:23.197309    6288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:f3:7d:50:b1:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:23.199117    6288 main.go:141] libmachine: STDOUT: 
	I1216 12:33:23.199135    6288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:33:23.199156    6288 client.go:171] duration metric: took 325.210667ms to LocalClient.Create
	I1216 12:33:25.201569    6288 start.go:128] duration metric: took 2.360990834s to createHost
	I1216 12:33:25.201713    6288 start.go:83] releasing machines lock for "kubernetes-upgrade-781000", held for 2.361221625s
	W1216 12:33:25.201772    6288 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:33:25.213005    6288 out.go:177] * Deleting "kubernetes-upgrade-781000" in qemu2 ...
	W1216 12:33:25.244371    6288 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:33:25.244396    6288 start.go:729] Will try again in 5 seconds ...
	I1216 12:33:30.245817    6288 start.go:360] acquireMachinesLock for kubernetes-upgrade-781000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:33:30.246113    6288 start.go:364] duration metric: took 251.208µs to acquireMachinesLock for "kubernetes-upgrade-781000"
	I1216 12:33:30.246148    6288 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:33:30.246208    6288 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:33:30.255690    6288 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:33:30.280361    6288 start.go:159] libmachine.API.Create for "kubernetes-upgrade-781000" (driver="qemu2")
	I1216 12:33:30.280402    6288 client.go:168] LocalClient.Create starting
	I1216 12:33:30.280512    6288 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:33:30.280577    6288 main.go:141] libmachine: Decoding PEM data...
	I1216 12:33:30.280589    6288 main.go:141] libmachine: Parsing certificate...
	I1216 12:33:30.280630    6288 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:33:30.280671    6288 main.go:141] libmachine: Decoding PEM data...
	I1216 12:33:30.280679    6288 main.go:141] libmachine: Parsing certificate...
	I1216 12:33:30.281111    6288 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:33:30.443647    6288 main.go:141] libmachine: Creating SSH key...
	I1216 12:33:30.545060    6288 main.go:141] libmachine: Creating Disk image...
	I1216 12:33:30.545066    6288 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:33:30.545294    6288 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:30.555360    6288 main.go:141] libmachine: STDOUT: 
	I1216 12:33:30.555381    6288 main.go:141] libmachine: STDERR: 
	I1216 12:33:30.555453    6288 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2 +20000M
	I1216 12:33:30.564009    6288 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:33:30.564069    6288 main.go:141] libmachine: STDERR: 
	I1216 12:33:30.564086    6288 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:30.564090    6288 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:33:30.564098    6288 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:33:30.564131    6288 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:cf:73:bb:59:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:30.565991    6288 main.go:141] libmachine: STDOUT: 
	I1216 12:33:30.566030    6288 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:33:30.566044    6288 client.go:171] duration metric: took 285.634917ms to LocalClient.Create
	I1216 12:33:32.568176    6288 start.go:128] duration metric: took 2.321930708s to createHost
	I1216 12:33:32.568207    6288 start.go:83] releasing machines lock for "kubernetes-upgrade-781000", held for 2.322063542s
	W1216 12:33:32.568362    6288 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-781000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-781000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:33:32.581821    6288 out.go:201] 
	W1216 12:33:32.585883    6288 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:33:32.585910    6288 out.go:270] * 
	* 
	W1216 12:33:32.589554    6288 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:33:32.599838    6288 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-781000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-781000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-781000: (3.088384791s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-781000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-781000 status --format={{.Host}}: exit status 7 (66.837917ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-781000 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-781000 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.197133125s)

-- stdout --
	* [kubernetes-upgrade-781000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-781000" primary control-plane node in "kubernetes-upgrade-781000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-781000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-781000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:33:35.806335    6325 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:33:35.806511    6325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:33:35.806515    6325 out.go:358] Setting ErrFile to fd 2...
	I1216 12:33:35.806517    6325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:33:35.806649    6325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:33:35.807784    6325 out.go:352] Setting JSON to false
	I1216 12:33:35.825898    6325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3786,"bootTime":1734377429,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:33:35.825967    6325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:33:35.830614    6325 out.go:177] * [kubernetes-upgrade-781000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:33:35.836579    6325 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:33:35.836653    6325 notify.go:220] Checking for updates...
	I1216 12:33:35.844394    6325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:33:35.847560    6325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:33:35.851598    6325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:33:35.852942    6325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:33:35.855556    6325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:33:35.858784    6325 config.go:182] Loaded profile config "kubernetes-upgrade-781000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1216 12:33:35.859061    6325 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:33:35.860737    6325 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:33:35.867551    6325 start.go:297] selected driver: qemu2
	I1216 12:33:35.867560    6325 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:33:35.867618    6325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:33:35.870151    6325 cni.go:84] Creating CNI manager for ""
	I1216 12:33:35.870187    6325 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:33:35.870216    6325 start.go:340] cluster config:
	{Name:kubernetes-upgrade-781000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:33:35.874481    6325 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:33:35.882532    6325 out.go:177] * Starting "kubernetes-upgrade-781000" primary control-plane node in "kubernetes-upgrade-781000" cluster
	I1216 12:33:35.886587    6325 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:33:35.886609    6325 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:33:35.886619    6325 cache.go:56] Caching tarball of preloaded images
	I1216 12:33:35.886694    6325 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:33:35.886699    6325 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:33:35.886745    6325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kubernetes-upgrade-781000/config.json ...
	I1216 12:33:35.887111    6325 start.go:360] acquireMachinesLock for kubernetes-upgrade-781000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:33:35.887143    6325 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "kubernetes-upgrade-781000"
	I1216 12:33:35.887152    6325 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:33:35.887158    6325 fix.go:54] fixHost starting: 
	I1216 12:33:35.887275    6325 fix.go:112] recreateIfNeeded on kubernetes-upgrade-781000: state=Stopped err=<nil>
	W1216 12:33:35.887281    6325 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:33:35.895583    6325 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-781000" ...
	I1216 12:33:35.899549    6325 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:33:35.899601    6325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:cf:73:bb:59:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:35.901834    6325 main.go:141] libmachine: STDOUT: 
	I1216 12:33:35.901862    6325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:33:35.901895    6325 fix.go:56] duration metric: took 14.736ms for fixHost
	I1216 12:33:35.901900    6325 start.go:83] releasing machines lock for "kubernetes-upgrade-781000", held for 14.752667ms
	W1216 12:33:35.901906    6325 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:33:35.901951    6325 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:33:35.901955    6325 start.go:729] Will try again in 5 seconds ...
	I1216 12:33:40.904222    6325 start.go:360] acquireMachinesLock for kubernetes-upgrade-781000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:33:40.904760    6325 start.go:364] duration metric: took 407.167µs to acquireMachinesLock for "kubernetes-upgrade-781000"
	I1216 12:33:40.904833    6325 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:33:40.904856    6325 fix.go:54] fixHost starting: 
	I1216 12:33:40.905600    6325 fix.go:112] recreateIfNeeded on kubernetes-upgrade-781000: state=Stopped err=<nil>
	W1216 12:33:40.905628    6325 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:33:40.915154    6325 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-781000" ...
	I1216 12:33:40.921312    6325 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:33:40.921652    6325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:cf:73:bb:59:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubernetes-upgrade-781000/disk.qcow2
	I1216 12:33:40.932191    6325 main.go:141] libmachine: STDOUT: 
	I1216 12:33:40.932235    6325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:33:40.932321    6325 fix.go:56] duration metric: took 27.46625ms for fixHost
	I1216 12:33:40.932341    6325 start.go:83] releasing machines lock for "kubernetes-upgrade-781000", held for 27.556083ms
	W1216 12:33:40.932535    6325 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-781000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-781000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:33:40.941272    6325 out.go:201] 
	W1216 12:33:40.944337    6325 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:33:40.944361    6325 out.go:270] * 
	* 
	W1216 12:33:40.946711    6325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:33:40.956287    6325 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-781000 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-781000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-781000 version --output=json: exit status 1 (65.429917ms)

** stderr ** 
	error: context "kubernetes-upgrade-781000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-16 12:33:41.037171 -0800 PST m=+3550.915377251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-781000 -n kubernetes-upgrade-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-781000 -n kubernetes-upgrade-781000: exit status 7 (37.852334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-781000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-781000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-781000
--- FAIL: TestKubernetesUpgrade (18.45s)
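Both restart attempts above die at the same point: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the follow-up kubectl call finds no "kubernetes-upgrade-781000" context. A minimal diagnostic sketch for this failure mode, assuming the socket_vmnet daemon binary sits next to the client (the daemon path and gateway flag below are assumptions, not taken from this log):

    # Is anything serving the socket the client tried to dial?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If the daemon is not running, start it as root (illustrative flags;
    # check the socket_vmnet README for the authoritative invocation):
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet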

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20091
- KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current798687118/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.00s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.05s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20091
- KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3361106963/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.05s)
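Both upgrade-v1.11.0-to-current and upgrade-v1.2.0-to-current fail identically, and before any upgrade logic runs: HyperKit is an Intel-only macOS hypervisor, so minikube rejects the driver outright on this arm64 host (DRV_UNSUPPORTED_OS, exit status 56). A quick confirmation on the host, plus an invocation that is supported on Apple silicon (both lines illustrative only):

    uname -sm                                        # -> Darwin arm64
    out/minikube-darwin-arm64 start --driver=qemu2   # arm64-capable driver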

TestStoppedBinaryUpgrade/Upgrade (573.04s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2501413304 start -p stopped-upgrade-349000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2501413304 start -p stopped-upgrade-349000 --memory=2200 --vm-driver=qemu2 : (39.209498959s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2501413304 -p stopped-upgrade-349000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2501413304 -p stopped-upgrade-349000 stop: (12.124677042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1216 12:35:18.633230    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:38:28.362754    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.591404541s)

-- stdout --
	* [stopped-upgrade-349000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-349000" primary control-plane node in "stopped-upgrade-349000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-349000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1216 12:34:33.653743    6375 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:34:33.653922    6375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:34:33.653926    6375 out.go:358] Setting ErrFile to fd 2...
	I1216 12:34:33.653928    6375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:34:33.654095    6375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:34:33.655359    6375 out.go:352] Setting JSON to false
	I1216 12:34:33.675477    6375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3844,"bootTime":1734377429,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:34:33.675581    6375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:34:33.680559    6375 out.go:177] * [stopped-upgrade-349000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:34:33.688517    6375 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:34:33.688546    6375 notify.go:220] Checking for updates...
	I1216 12:34:33.696472    6375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:34:33.699490    6375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:34:33.703510    6375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:34:33.706557    6375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:34:33.709522    6375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:34:33.712781    6375 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:34:33.716547    6375 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 12:34:33.719472    6375 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:34:33.722488    6375 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:34:33.728441    6375 start.go:297] selected driver: qemu2
	I1216 12:34:33.728518    6375 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:34:33.728579    6375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:34:33.731382    6375 cni.go:84] Creating CNI manager for ""
	I1216 12:34:33.731416    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:34:33.731443    6375 start.go:340] cluster config:
	{Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:34:33.731495    6375 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:34:33.739540    6375 out.go:177] * Starting "stopped-upgrade-349000" primary control-plane node in "stopped-upgrade-349000" cluster
	I1216 12:34:33.743510    6375 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 12:34:33.743525    6375 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1216 12:34:33.743537    6375 cache.go:56] Caching tarball of preloaded images
	I1216 12:34:33.743618    6375 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:34:33.743624    6375 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1216 12:34:33.743688    6375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/config.json ...
	I1216 12:34:33.744140    6375 start.go:360] acquireMachinesLock for stopped-upgrade-349000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:34:33.744184    6375 start.go:364] duration metric: took 38.75µs to acquireMachinesLock for "stopped-upgrade-349000"
	I1216 12:34:33.744192    6375 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:34:33.744197    6375 fix.go:54] fixHost starting: 
	I1216 12:34:33.744298    6375 fix.go:112] recreateIfNeeded on stopped-upgrade-349000: state=Stopped err=<nil>
	W1216 12:34:33.744306    6375 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:34:33.748322    6375 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-349000" ...
	I1216 12:34:33.756511    6375 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:34:33.756587    6375 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50988-:22,hostfwd=tcp::50989-:2376,hostname=stopped-upgrade-349000 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/disk.qcow2
	I1216 12:34:33.803880    6375 main.go:141] libmachine: STDOUT: 
	I1216 12:34:33.803911    6375 main.go:141] libmachine: STDERR: 
	I1216 12:34:33.803918    6375 main.go:141] libmachine: Waiting for VM to start (ssh -p 50988 docker@127.0.0.1)...
	I1216 12:34:53.079455    6375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/config.json ...
	I1216 12:34:53.079993    6375 machine.go:93] provisionDockerMachine start ...
	I1216 12:34:53.080137    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.080455    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.080468    6375 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 12:34:53.160390    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 12:34:53.160403    6375 buildroot.go:166] provisioning hostname "stopped-upgrade-349000"
	I1216 12:34:53.160472    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.160586    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.160597    6375 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-349000 && echo "stopped-upgrade-349000" | sudo tee /etc/hostname
	I1216 12:34:53.234162    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-349000
	
	I1216 12:34:53.234229    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.234341    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.234349    6375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-349000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-349000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-349000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 12:34:53.304081    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 12:34:53.304095    6375 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20091-990/.minikube CaCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20091-990/.minikube}
	I1216 12:34:53.304103    6375 buildroot.go:174] setting up certificates
	I1216 12:34:53.304108    6375 provision.go:84] configureAuth start
	I1216 12:34:53.304115    6375 provision.go:143] copyHostCerts
	I1216 12:34:53.304201    6375 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem, removing ...
	I1216 12:34:53.304208    6375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem
	I1216 12:34:53.304329    6375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/key.pem (1675 bytes)
	I1216 12:34:53.304538    6375 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem, removing ...
	I1216 12:34:53.304542    6375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem
	I1216 12:34:53.304604    6375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/ca.pem (1082 bytes)
	I1216 12:34:53.304719    6375 exec_runner.go:144] found /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem, removing ...
	I1216 12:34:53.304722    6375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem
	I1216 12:34:53.304786    6375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20091-990/.minikube/cert.pem (1123 bytes)
	I1216 12:34:53.304886    6375 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-349000 san=[127.0.0.1 localhost minikube stopped-upgrade-349000]
	I1216 12:34:53.361142    6375 provision.go:177] copyRemoteCerts
	I1216 12:34:53.361192    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 12:34:53.361200    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:34:53.398249    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 12:34:53.404893    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 12:34:53.412298    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 12:34:53.419668    6375 provision.go:87] duration metric: took 115.548459ms to configureAuth
	I1216 12:34:53.419677    6375 buildroot.go:189] setting minikube options for container-runtime
	I1216 12:34:53.419788    6375 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:34:53.419836    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.419933    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.419938    6375 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 12:34:53.488108    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1216 12:34:53.488117    6375 buildroot.go:70] root file system type: tmpfs
	I1216 12:34:53.488170    6375 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 12:34:53.488236    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.488344    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.488378    6375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 12:34:53.558416    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 12:34:53.558474    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.558576    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.558584    6375 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 12:34:53.934476    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
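The diff-or-install one-liner above is what keeps provisioning idempotent: the freshly rendered docker.service.new is compared against the installed unit and is only moved into place (followed by daemon-reload, enable, and restart) when the two differ or the unit is missing. Here diff reports that /lib/systemd/system/docker.service does not exist yet, so the new unit is installed and enabled, which is why systemd prints the "Created symlink" line.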
	I1216 12:34:53.934497    6375 machine.go:96] duration metric: took 854.486875ms to provisionDockerMachine
	I1216 12:34:53.934505    6375 start.go:293] postStartSetup for "stopped-upgrade-349000" (driver="qemu2")
	I1216 12:34:53.934512    6375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 12:34:53.934592    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 12:34:53.934603    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:34:53.971927    6375 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 12:34:53.973113    6375 info.go:137] Remote host: Buildroot 2021.02.12
	I1216 12:34:53.973122    6375 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20091-990/.minikube/addons for local assets ...
	I1216 12:34:53.973209    6375 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20091-990/.minikube/files for local assets ...
	I1216 12:34:53.973358    6375 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem -> 14942.pem in /etc/ssl/certs
	I1216 12:34:53.973518    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 12:34:53.976414    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem --> /etc/ssl/certs/14942.pem (1708 bytes)
	I1216 12:34:53.983574    6375 start.go:296] duration metric: took 49.063333ms for postStartSetup
	I1216 12:34:53.983587    6375 fix.go:56] duration metric: took 20.239221709s for fixHost
	I1216 12:34:53.983629    6375 main.go:141] libmachine: Using SSH client type: native
	I1216 12:34:53.983742    6375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052931b0] 0x1052959f0 <nil>  [] 0s} localhost 50988 <nil> <nil>}
	I1216 12:34:53.983746    6375 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 12:34:54.050221    6375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734381294.540987129
	
	I1216 12:34:54.050231    6375 fix.go:216] guest clock: 1734381294.540987129
	I1216 12:34:54.050235    6375 fix.go:229] Guest: 2024-12-16 12:34:54.540987129 -0800 PST Remote: 2024-12-16 12:34:53.983589 -0800 PST m=+20.360056626 (delta=557.398129ms)
	I1216 12:34:54.050246    6375 fix.go:200] guest clock delta is within tolerance: 557.398129ms
	I1216 12:34:54.050250    6375 start.go:83] releasing machines lock for "stopped-upgrade-349000", held for 20.305890833s
	I1216 12:34:54.050326    6375 ssh_runner.go:195] Run: cat /version.json
	I1216 12:34:54.050329    6375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 12:34:54.050335    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:34:54.050347    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	W1216 12:34:54.050838    6375 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51136->127.0.0.1:50988: read: connection reset by peer
	I1216 12:34:54.050855    6375 retry.go:31] will retry after 331.051942ms: ssh: handshake failed: read tcp 127.0.0.1:51136->127.0.0.1:50988: read: connection reset by peer
	W1216 12:34:54.431048    6375 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1216 12:34:54.431233    6375 ssh_runner.go:195] Run: systemctl --version
	I1216 12:34:54.434906    6375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 12:34:54.438085    6375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 12:34:54.438159    6375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 12:34:54.443217    6375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 12:34:54.450890    6375 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 12:34:54.450905    6375 start.go:495] detecting cgroup driver to use...
	I1216 12:34:54.451026    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 12:34:54.460143    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1216 12:34:54.464391    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 12:34:54.468268    6375 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 12:34:54.468302    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 12:34:54.471848    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 12:34:54.475173    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 12:34:54.478133    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 12:34:54.481051    6375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 12:34:54.484261    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 12:34:54.487130    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 12:34:54.490119    6375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 12:34:54.492916    6375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 12:34:54.496014    6375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 12:34:54.498821    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:54.586833    6375 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 12:34:54.593245    6375 start.go:495] detecting cgroup driver to use...
	I1216 12:34:54.593352    6375 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 12:34:54.598977    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 12:34:54.608415    6375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 12:34:54.619556    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 12:34:54.624147    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 12:34:54.628867    6375 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1216 12:34:54.680228    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 12:34:54.686043    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 12:34:54.691854    6375 ssh_runner.go:195] Run: which cri-dockerd
	I1216 12:34:54.693050    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 12:34:54.695940    6375 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1216 12:34:54.700915    6375 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 12:34:54.788450    6375 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 12:34:54.868374    6375 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 12:34:54.868443    6375 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 12:34:54.873882    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:54.960804    6375 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 12:34:56.100125    6375 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.139295958s)
	I1216 12:34:56.100199    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 12:34:56.106828    6375 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 12:34:56.113319    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 12:34:56.118060    6375 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 12:34:56.179413    6375 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 12:34:56.240247    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:56.316199    6375 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 12:34:56.322711    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 12:34:56.326887    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:56.404418    6375 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 12:34:56.441626    6375 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 12:34:56.441744    6375 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 12:34:56.444700    6375 start.go:563] Will wait 60s for crictl version
	I1216 12:34:56.444768    6375 ssh_runner.go:195] Run: which crictl
	I1216 12:34:56.446311    6375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 12:34:56.461764    6375 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1216 12:34:56.461846    6375 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 12:34:56.482458    6375 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 12:34:56.502211    6375 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1216 12:34:56.502358    6375 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1216 12:34:56.503661    6375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 12:34:56.507628    6375 kubeadm.go:883] updating cluster {Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1216 12:34:56.507671    6375 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1216 12:34:56.507724    6375 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 12:34:56.518040    6375 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 12:34:56.518048    6375 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 12:34:56.518103    6375 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 12:34:56.521196    6375 ssh_runner.go:195] Run: which lz4
	I1216 12:34:56.522490    6375 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 12:34:56.523691    6375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 12:34:56.523701    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1216 12:34:57.492812    6375 docker.go:653] duration metric: took 970.357875ms to copy over tarball
	I1216 12:34:57.492886    6375 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 12:34:58.675860    6375 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.182947375s)
	I1216 12:34:58.675882    6375 ssh_runner.go:146] rm: /preloaded.tar.lz4
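The preload restore above follows a fixed pattern: stat the target to see whether /preloaded.tar.lz4 already exists in the guest, copy the cached tarball over SSH when it does not, unpack it into /var with lz4, then delete the tarball. Condensed to the guest-side commands (paths copied from the log; the scp transfer is driven from the host and elided here):

    stat -c "%s %y" /preloaded.tar.lz4    # existence check; exit status 1 => transfer
    # host side: scp the cached preloaded-images tarball --> /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4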
	I1216 12:34:58.691812    6375 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 12:34:58.694918    6375 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1216 12:34:58.700186    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:34:58.788313    6375 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 12:35:00.560497    6375 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.772149959s)
	I1216 12:35:00.560601    6375 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 12:35:00.575003    6375 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 12:35:00.575015    6375 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1216 12:35:00.575020    6375 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 12:35:00.581779    6375 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:00.583374    6375 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:00.584844    6375 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:00.585131    6375 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:00.586124    6375 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:00.586178    6375 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:00.587547    6375 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:00.587585    6375 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:00.588682    6375 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:00.590072    6375 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:00.590187    6375 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 12:35:00.590237    6375 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:00.591096    6375 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:00.591571    6375 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:00.592408    6375 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 12:35:00.593008    6375 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.353066    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:01.364132    6375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1216 12:35:01.364171    6375 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:01.364222    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1216 12:35:01.372459    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:01.376058    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1216 12:35:01.376961    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:01.389817    6375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1216 12:35:01.389845    6375 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:01.389918    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1216 12:35:01.391369    6375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1216 12:35:01.391386    6375 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:01.391437    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1216 12:35:01.407434    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1216 12:35:01.409607    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1216 12:35:01.444936    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:01.455557    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:01.457100    6375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1216 12:35:01.457124    6375 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:01.457168    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1216 12:35:01.470705    6375 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1216 12:35:01.470716    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1216 12:35:01.470726    6375 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:01.470782    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1216 12:35:01.480995    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1216 12:35:01.564797    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 12:35:01.575198    6375 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1216 12:35:01.575220    6375 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1216 12:35:01.575282    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1216 12:35:01.589451    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1216 12:35:01.589594    6375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1216 12:35:01.591159    6375 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1216 12:35:01.591169    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1216 12:35:01.599819    6375 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 12:35:01.599827    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1216 12:35:01.616869    6375 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1216 12:35:01.617021    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.625869    6375 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
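
	The pause image above shows minikube's generic cache path end to end: stat the target under /var/lib/minikube/images on the node, scp the tarball over from the host cache when the stat fails, then stream it into the runtime with "docker load". A minimal sketch of the same transfer done by hand, assuming the default guest user "docker" and the per-profile SSH key; <profile> and <node-ip> are placeholders:

	scp -i ~/.minikube/machines/<profile>/id_rsa \
	    ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 \
	    docker@<node-ip>:/var/lib/minikube/images/pause_3.7
	ssh -i ~/.minikube/machines/<profile>/id_rsa docker@<node-ip> \
	    'sudo cat /var/lib/minikube/images/pause_3.7 | docker load'
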
	I1216 12:35:01.630481    6375 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1216 12:35:01.630507    6375 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.630581    6375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 12:35:01.641507    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 12:35:01.641657    6375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 12:35:01.643037    6375 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1216 12:35:01.643054    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1216 12:35:01.657985    6375 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1216 12:35:01.658132    6375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:01.678177    6375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1216 12:35:01.678203    6375 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:01.678268    6375 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:35:01.694214    6375 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 12:35:01.694228    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1216 12:35:01.702615    6375 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 12:35:01.702783    6375 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 12:35:01.741373    6375 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1216 12:35:01.741399    6375 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 12:35:01.741427    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1216 12:35:01.771613    6375 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 12:35:01.771626    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1216 12:35:02.004231    6375 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 12:35:02.004271    6375 cache_images.go:92] duration metric: took 1.429231333s to LoadCachedImages
	W1216 12:35:02.004306    6375 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
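
	Note the asymmetry above: the small images (pause, coredns, storage-provisioner) transfer and load, but LoadCachedImages still fails because the kube-apiserver/controller-manager/scheduler/proxy tarballs are missing from the host-side cache. To inspect what is actually cached (the cache lives under the .minikube directory, here rooted in the Jenkins workspace; the path below assumes a default install):

	ls ~/.minikube/cache/images/arm64/registry.k8s.io/
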
	I1216 12:35:02.004312    6375 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1216 12:35:02.004367    6375 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-349000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
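
	The [Unit]/[Service] fragment above is the kubelet drop-in that gets written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes). The merged unit systemd actually runs can be checked on the node with standard systemctl tooling:

	minikube ssh -p stopped-upgrade-349000 -- systemctl cat kubelet
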
	I1216 12:35:02.004439    6375 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 12:35:02.018121    6375 cni.go:84] Creating CNI manager for ""
	I1216 12:35:02.018133    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:35:02.018144    6375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 12:35:02.018153    6375 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-349000 NodeName:stopped-upgrade-349000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 12:35:02.018230    6375 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-349000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 12:35:02.018305    6375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1216 12:35:02.021179    6375 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 12:35:02.021241    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 12:35:02.024320    6375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1216 12:35:02.029223    6375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 12:35:02.034152    6375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1216 12:35:02.039855    6375 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1216 12:35:02.041036    6375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
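
	The /etc/hosts rewrite just run is deliberately idempotent: filter out any existing line ending in the hostname, append the fresh mapping, and copy the temp file back into place with sudo. The same pattern, generalized (update_hosts_entry is a hypothetical helper name):

	update_hosts_entry() {  # usage: update_hosts_entry <ip> <hostname>
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}
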
	I1216 12:35:02.044816    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:35:02.129520    6375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:35:02.135880    6375 certs.go:68] Setting up /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000 for IP: 10.0.2.15
	I1216 12:35:02.135888    6375 certs.go:194] generating shared ca certs ...
	I1216 12:35:02.135897    6375 certs.go:226] acquiring lock for ca certs: {Name:mkaa7d3f47c3893d22672057b4e8b1df7ff583ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.136080    6375 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20091-990/.minikube/ca.key
	I1216 12:35:02.136855    6375 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.key
	I1216 12:35:02.136864    6375 certs.go:256] generating profile certs ...
	I1216 12:35:02.137131    6375 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.key
	I1216 12:35:02.137146    6375 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09
	I1216 12:35:02.137159    6375 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1216 12:35:02.293901    6375 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09 ...
	I1216 12:35:02.293915    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09: {Name:mk24cb9d1c208b94e44645be350fcae9c9cc59c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.294285    6375 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09 ...
	I1216 12:35:02.294290    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09: {Name:mkd8e63c1869763c83ae20b5c66ff321c7a7d066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.294464    6375 certs.go:381] copying /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt.f7fa1d09 -> /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt
	I1216 12:35:02.294599    6375 certs.go:385] copying /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key.f7fa1d09 -> /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key
	I1216 12:35:02.295006    6375 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/proxy-client.key
	I1216 12:35:02.295227    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494.pem (1338 bytes)
	W1216 12:35:02.295471    6375 certs.go:480] ignoring /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494_empty.pem, impossibly tiny 0 bytes
	I1216 12:35:02.295479    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 12:35:02.295511    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem (1082 bytes)
	I1216 12:35:02.295539    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem (1123 bytes)
	I1216 12:35:02.295562    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/certs/key.pem (1675 bytes)
	I1216 12:35:02.295611    6375 certs.go:484] found cert: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem (1708 bytes)
	I1216 12:35:02.295996    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 12:35:02.303464    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 12:35:02.310544    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 12:35:02.317123    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 12:35:02.324254    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 12:35:02.330992    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 12:35:02.337253    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 12:35:02.344189    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 12:35:02.351214    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 12:35:02.357407    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/certs/1494.pem --> /usr/share/ca-certificates/1494.pem (1338 bytes)
	I1216 12:35:02.364500    6375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/ssl/certs/14942.pem --> /usr/share/ca-certificates/14942.pem (1708 bytes)
	I1216 12:35:02.371472    6375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 12:35:02.376534    6375 ssh_runner.go:195] Run: openssl version
	I1216 12:35:02.378428    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 12:35:02.381282    6375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:35:02.382651    6375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:35:02.382681    6375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 12:35:02.384346    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 12:35:02.387449    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1494.pem && ln -fs /usr/share/ca-certificates/1494.pem /etc/ssl/certs/1494.pem"
	I1216 12:35:02.390441    6375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1494.pem
	I1216 12:35:02.391806    6375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/1494.pem
	I1216 12:35:02.391838    6375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1494.pem
	I1216 12:35:02.393550    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1494.pem /etc/ssl/certs/51391683.0"
	I1216 12:35:02.396808    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14942.pem && ln -fs /usr/share/ca-certificates/14942.pem /etc/ssl/certs/14942.pem"
	I1216 12:35:02.400063    6375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14942.pem
	I1216 12:35:02.401488    6375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/14942.pem
	I1216 12:35:02.401518    6375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14942.pem
	I1216 12:35:02.403229    6375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14942.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 12:35:02.406093    6375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 12:35:02.407527    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 12:35:02.409731    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 12:35:02.411638    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 12:35:02.413804    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 12:35:02.415537    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 12:35:02.417200    6375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
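
	The six openssl probes above all use "-checkend 86400": exit status 0 means the certificate is still valid one day (86,400 seconds) from now, nonzero means it expires sooner and would need regeneration. Standalone form, same flags:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "valid for at least a day" || echo "expires within a day"
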
	I1216 12:35:02.419105    6375 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 12:35:02.419183    6375 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 12:35:02.429398    6375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 12:35:02.432549    6375 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 12:35:02.432559    6375 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 12:35:02.432591    6375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 12:35:02.436016    6375 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 12:35:02.436337    6375 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-349000" does not appear in /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:35:02.436437    6375 kubeconfig.go:62] /Users/jenkins/minikube-integration/20091-990/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-349000" cluster setting kubeconfig missing "stopped-upgrade-349000" context setting]
	I1216 12:35:02.436621    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:35:02.437090    6375 kapi.go:59] client config for stopped-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106cfef70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:35:02.437597    6375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 12:35:02.440434    6375 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-349000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
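
	The drift detected above comes down to two settings: criSocket gains the unix:// scheme that newer cri-dockerd/kubeadm expect, and cgroupDriver flips from systemd to cgroupfs (plus the hairpinMode/runtimeRequestTimeout additions). The cgroupfs side was confirmed against the runtime earlier in the run with:

	docker info --format '{{.CgroupDriver}}'
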
	I1216 12:35:02.440440    6375 kubeadm.go:1160] stopping kube-system containers ...
	I1216 12:35:02.440487    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 12:35:02.450870    6375 docker.go:483] Stopping containers: [c238f990b3b5 195b09e77a13 a43b19631f1d 5c2af2bbc9dc 03ff67ad0d23 178b447de782 c3ca363e053e 22c7494ce80d]
	I1216 12:35:02.450948    6375 ssh_runner.go:195] Run: docker stop c238f990b3b5 195b09e77a13 a43b19631f1d 5c2af2bbc9dc 03ff67ad0d23 178b447de782 c3ca363e053e 22c7494ce80d
	I1216 12:35:02.461469    6375 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 12:35:02.467262    6375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:35:02.469984    6375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:35:02.469990    6375 kubeadm.go:157] found existing configuration files:
	
	I1216 12:35:02.470025    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf
	I1216 12:35:02.472713    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:35:02.472746    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:35:02.475886    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf
	I1216 12:35:02.478506    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:35:02.478548    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:35:02.481011    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf
	I1216 12:35:02.484116    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:35:02.484142    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:35:02.487048    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf
	I1216 12:35:02.489486    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:35:02.489520    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:35:02.492430    6375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:35:02.495538    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:02.517741    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.146436    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.265437    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 12:35:03.290801    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
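
	Note that the restart path reruns individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd local) instead of a full "kubeadm init", so existing cluster state is reused rather than wiped. The available phases can be listed with the same pinned binary:

	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase --help
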
	I1216 12:35:03.316258    6375 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:35:03.316346    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:03.818414    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:04.318425    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:35:04.326084    6375 api_server.go:72] duration metric: took 1.009816458s to wait for apiserver process to appear ...
	I1216 12:35:04.326099    6375 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:35:04.326122    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:09.328248    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:09.328273    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:14.328533    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:14.328572    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:19.328921    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:19.328973    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:24.329416    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:24.329456    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:29.330376    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:29.330418    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:34.331228    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:34.331267    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:39.332529    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:39.332568    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:44.333931    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:44.333965    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:49.335600    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:49.335640    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:54.337800    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:54.337841    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:35:59.340195    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:35:59.340236    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:04.342499    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
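
	Every healthz probe in the loop above times out after roughly five seconds; a healthy apiserver would answer with a plain "ok". The equivalent manual probe (standard curl flags; -k skips verification of the cluster-CA-signed certificate):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz

	Since 10.0.2.15 is QEMU's default user-mode-NAT guest address, the unbroken run of timeouts here is consistent with the apiserver never becoming reachable at that address rather than merely being slow to start; the runner falls back to gathering container logs below.
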
	I1216 12:36:04.342687    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:04.353658    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:04.353745    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:04.364381    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:04.364476    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:04.374921    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:04.375010    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:04.385189    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:04.385274    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:04.395082    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:04.395152    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:04.406013    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:04.406092    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:04.416755    6375 logs.go:282] 0 containers: []
	W1216 12:36:04.416767    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:04.416842    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:04.427327    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:04.427348    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:04.427355    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:04.431680    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:04.431688    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:04.550516    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:04.550530    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:04.563327    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:04.563341    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:04.574994    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:04.575007    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:04.587037    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:04.587053    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:04.602786    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:04.602797    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:04.620678    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:04.620692    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:04.655972    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:04.655985    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:04.670672    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:04.670682    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:04.685276    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:04.685286    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:04.696444    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:04.696455    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:04.721082    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:04.721104    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:04.759275    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:04.759287    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:04.774226    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:04.774237    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:04.786011    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:04.786023    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:04.797989    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:04.798001    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:07.314024    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:12.316318    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:12.316594    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:12.340054    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:12.340139    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:12.352794    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:12.352883    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:12.363622    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:12.363698    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:12.373618    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:12.373708    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:12.383766    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:12.383850    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:12.397521    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:12.397596    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:12.414260    6375 logs.go:282] 0 containers: []
	W1216 12:36:12.414275    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:12.414337    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:12.424898    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:12.424918    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:12.424924    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:12.429739    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:12.429748    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:12.469189    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:12.469203    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:12.484369    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:12.484383    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:12.495365    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:12.495378    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:12.507002    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:12.507013    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:12.543993    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:12.544002    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:12.558112    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:12.558127    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:12.581915    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:12.581924    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:12.606598    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:12.606609    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:12.618270    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:12.618282    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:12.629958    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:12.629969    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:12.645661    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:12.645672    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:12.657767    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:12.657780    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:12.672290    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:12.672299    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:12.683523    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:12.683533    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:12.705139    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:12.705151    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:15.222534    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:20.224980    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:20.225514    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:20.259129    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:20.259286    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:20.279500    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:20.279605    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:20.294335    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:20.294424    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:20.306891    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:20.306972    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:20.319145    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:20.319225    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:20.330300    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:20.330377    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:20.341242    6375 logs.go:282] 0 containers: []
	W1216 12:36:20.341255    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:20.341328    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:20.358558    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:20.358583    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:20.358589    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:20.382723    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:20.382733    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:20.394106    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:20.394117    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:20.406512    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:20.406525    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:20.421863    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:20.421874    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:20.438988    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:20.438999    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:20.450792    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:20.450803    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:20.475731    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:20.475740    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:20.513343    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:20.513353    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:20.528019    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:20.528032    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:20.541964    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:20.541973    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:20.553275    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:20.553289    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:20.570699    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:20.570709    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:20.584077    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:20.584091    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:20.595506    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:20.595517    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:20.599947    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:20.599954    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:20.633688    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:20.633702    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:23.150242    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:28.152598    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:28.152754    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:28.163296    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:28.163379    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:28.173680    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:28.173753    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:28.184395    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:28.184470    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:28.195077    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:28.195160    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:28.205469    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:28.205550    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:28.215659    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:28.215736    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:28.225840    6375 logs.go:282] 0 containers: []
	W1216 12:36:28.225854    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:28.225921    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:28.237466    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:28.237484    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:28.237489    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:28.250824    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:28.250839    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:28.287234    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:28.287246    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:28.299176    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:28.299188    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:28.314094    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:28.314105    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:28.339654    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:28.339663    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:28.351217    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:28.351229    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:28.365145    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:28.365156    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:28.379161    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:28.379171    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:28.390873    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:28.390883    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:28.402714    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:28.402723    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:28.414936    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:28.414947    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:28.426348    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:28.426360    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:28.440184    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:28.440194    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:28.457198    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:28.457211    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:28.497036    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:28.497052    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:28.501918    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:28.501924    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
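The cadence above repeats for the rest of this test: a healthz probe, a "stopped" line roughly five seconds later, then a fresh diagnostics sweep. The underlying pattern is a poll-with-timeout loop: probe /healthz with a short client timeout and, on failure, collect logs before retrying. Below is a minimal Go sketch of that pattern; pollHealthz, the 2-second back-off, and the TLS handling are illustrative assumptions, not minikube's actual api_server.go code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it returns
// "ok" or the overall deadline expires. Hypothetical sketch of the
// pattern visible in the log, not minikube's real implementation.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped" line
		Transport: &http.Transport{
			// assumption: the test VM's apiserver certificate is self-signed
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver healthy
			}
		}
		// on failure the real runner gathers container logs here, then retries
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", time.Minute))
}
```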
	I1216 12:36:31.028497    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:36.031128    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:36.031314    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:36.049654    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:36.049766    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:36.063512    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:36.063600    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:36.075317    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:36.075403    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:36.086424    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:36.086516    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:36.096644    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:36.096719    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:36.107067    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:36.107144    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:36.116898    6375 logs.go:282] 0 containers: []
	W1216 12:36:36.116911    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:36.116978    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:36.127050    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:36.127069    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:36.127074    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:36.131174    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:36.131183    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:36.166059    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:36.166071    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:36.198640    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:36.198653    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:36.210133    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:36.210146    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:36.227377    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:36.227386    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:36.253209    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:36.253217    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:36.292445    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:36.292457    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:36.307785    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:36.307797    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:36.321754    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:36.321768    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:36.333791    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:36.333802    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:36.345924    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:36.345934    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:36.366782    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:36.366798    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:36.378914    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:36.378928    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:36.394320    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:36.394331    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:36.409665    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:36.409676    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:36.423863    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:36.423873    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
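Each diagnostics sweep begins by enumerating the control-plane containers: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component, whose output is split into container IDs (the "2 containers: [...]" lines above). A rough standalone equivalent, assuming a local docker CLI; listContainers is a hypothetical helper, not the logs.go implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>, mirroring the
// `docker ps -a --filter=name=... --format={{.ID}}` calls in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(c)
		fmt.Printf("%s: %d containers: %v (err: %v)\n", c, len(ids), ids, err)
	}
}
```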
	I1216 12:36:38.936899    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:43.939197    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:43.939477    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:43.963551    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:43.963694    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:43.980355    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:43.980452    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:43.993528    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:43.993604    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:44.005182    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:44.005266    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:44.016094    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:44.016177    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:44.027184    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:44.027255    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:44.038153    6375 logs.go:282] 0 containers: []
	W1216 12:36:44.038165    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:44.038232    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:44.049473    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:44.049492    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:44.049518    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:44.064228    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:44.064240    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:44.077927    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:44.077939    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:44.089791    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:44.089802    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:36:44.102092    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:44.102105    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:44.133828    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:44.133841    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:44.149533    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:44.149549    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:44.165190    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:44.165201    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:44.178096    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:44.178108    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:44.216225    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:44.216233    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:44.220176    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:44.220182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:44.257606    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:44.257618    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:44.275390    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:44.275401    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:44.293704    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:44.293719    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:44.305014    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:44.305025    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:44.316810    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:44.316824    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:44.332671    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:44.332681    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
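For every ID found, the sweep then tails the last 400 lines of that container's log, and pulls the kubelet and docker/cri-docker unit journals with `journalctl -n 400`. A sketch of that collection step under the same assumptions (tailContainerLogs and tailUnitLogs are illustrative names, not the minikube API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a container's log,
// matching the `docker logs --tail 400 <id>` calls repeated above.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

// tailUnitLogs mirrors the `sudo journalctl -u <unit> -n 400` calls
// used for the kubelet, docker, and cri-docker units.
func tailUnitLogs(n int, units ...string) (string, error) {
	args := []string{"journalctl", "-n", fmt.Sprint(n)}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("195b09e77a13", 400)
	fmt.Println(len(logs), err)
}
```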
	I1216 12:36:46.857983    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:51.860440    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:51.860634    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:51.874705    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:51.874791    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:51.889111    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:51.889196    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:51.899503    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:51.899587    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:51.910386    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:51.910462    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:51.921250    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:51.921320    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:51.931964    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:51.932035    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:51.942012    6375 logs.go:282] 0 containers: []
	W1216 12:36:51.942028    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:51.942104    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:51.952643    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:51.952661    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:36:51.952666    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:36:51.957290    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:51.957300    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:36:51.981691    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:36:51.981706    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:36:51.999044    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:36:51.999057    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:36:52.017001    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:36:52.017012    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:36:52.046025    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:52.046037    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:52.061510    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:52.061523    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:52.077306    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:52.077317    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:52.090470    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:36:52.090482    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:36:52.117090    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:36:52.117110    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:36:52.158586    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:36:52.158598    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:36:52.171687    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:36:52.171695    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:36:52.189066    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:52.189081    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:52.200941    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:52.200953    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:52.238558    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:36:52.238570    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:36:52.255295    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:36:52.255313    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:36:52.271909    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:36:52.271926    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
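Note that the compound commands in these sweeps (the `dmesg | tail` pipeline, the backticked `which crictl`, the `|| sudo docker ps -a` fallback) are all wrapped in `/bin/bash -c`, so the remote shell, not the Go process, interprets the pipes and fallbacks. Roughly, with runShell standing in for ssh_runner's remote execution:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runShell executes a full command line through bash so pipes, backticks,
// and || fallbacks behave as they do in the log, e.g.
//   sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
func runShell(cmdline string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runShell(`sudo dmesg --level warn,err | tail -n 400`)
	fmt.Println(out, err)
}
```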
	I1216 12:36:54.787925    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:36:59.789798    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:36:59.790009    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:36:59.806394    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:36:59.806497    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:36:59.819404    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:36:59.819488    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:36:59.830640    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:36:59.830720    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:36:59.841691    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:36:59.841771    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:36:59.851971    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:36:59.852048    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:36:59.862013    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:36:59.862080    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:36:59.872205    6375 logs.go:282] 0 containers: []
	W1216 12:36:59.872217    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:36:59.872283    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:36:59.883447    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:36:59.883465    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:36:59.883470    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:36:59.899024    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:36:59.899037    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:36:59.920908    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:36:59.920919    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:36:59.933002    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:36:59.933013    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:36:59.946012    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:36:59.946024    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:36:59.984271    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:36:59.984286    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:00.010786    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:00.010805    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:00.026151    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:00.026160    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:00.042925    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:00.042941    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:00.056885    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:00.056902    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:00.098143    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:00.098151    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:00.102972    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:00.102984    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:00.116053    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:00.116066    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:00.128126    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:00.128138    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:00.141545    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:00.141554    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:00.158521    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:00.158532    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:00.183705    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:00.183716    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:02.711201    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:07.713655    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:07.713823    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:07.729924    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:07.730011    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:07.742449    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:07.742534    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:07.776320    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:07.776403    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:07.790087    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:07.790169    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:07.802207    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:07.802290    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:07.817823    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:07.817907    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:07.834480    6375 logs.go:282] 0 containers: []
	W1216 12:37:07.834492    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:07.834553    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:07.852660    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:07.852674    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:07.852679    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:07.891718    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:07.891733    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:07.908618    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:07.908632    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:07.923316    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:07.923330    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:07.935881    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:07.935895    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:07.962378    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:07.962389    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:07.984819    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:07.984828    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:08.004381    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:08.004398    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:08.016628    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:08.016640    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:08.021116    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:08.021127    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:08.060304    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:08.060320    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:08.080226    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:08.080241    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:08.095129    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:08.095139    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:08.106807    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:08.106819    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:08.123373    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:08.123387    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:08.134892    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:08.134908    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:08.146394    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:08.146405    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:10.673571    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:15.676217    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:15.676283    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:15.688251    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:15.688324    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:15.699567    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:15.699647    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:15.711160    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:15.711241    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:15.723506    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:15.723632    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:15.735018    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:15.735097    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:15.746896    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:15.746982    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:15.758017    6375 logs.go:282] 0 containers: []
	W1216 12:37:15.758030    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:15.758101    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:15.770270    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:15.770290    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:15.770296    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:15.809906    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:15.809920    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:15.822723    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:15.822736    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:15.836480    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:15.836497    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:15.877173    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:15.877188    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:15.896163    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:15.896174    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:15.912172    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:15.912183    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:15.924151    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:15.924163    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:15.928497    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:15.928503    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:15.941916    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:15.941923    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:15.956686    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:15.956700    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:15.972598    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:15.972613    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:15.987534    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:15.987544    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:16.007563    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:16.007577    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:16.019212    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:16.019226    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:16.036395    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:16.036405    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:16.060892    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:16.060899    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:18.586329    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:23.588660    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:23.588766    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:23.600027    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:23.600108    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:23.614497    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:23.614583    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:23.627113    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:23.627190    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:23.639408    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:23.639493    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:23.651182    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:23.651262    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:23.662293    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:23.662382    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:23.677737    6375 logs.go:282] 0 containers: []
	W1216 12:37:23.677750    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:23.677821    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:23.689835    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:23.689854    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:23.689860    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:23.716896    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:23.716912    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:23.750329    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:23.750340    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:23.762589    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:23.762600    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:23.775048    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:23.775061    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:23.779928    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:23.779936    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:23.797181    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:23.797192    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:23.809546    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:23.809557    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:23.821695    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:23.821710    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:23.859179    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:23.859190    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:23.870698    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:23.870709    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:23.884201    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:23.884212    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:23.902151    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:23.902164    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:23.919172    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:23.919182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:23.942574    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:23.942582    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:23.975926    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:23.975937    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:23.989999    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:23.990008    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:26.506809    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:31.509135    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:31.509239    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:31.520638    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:31.520723    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:31.535072    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:31.535153    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:31.546679    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:31.546763    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:31.558273    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:31.558358    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:31.569783    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:31.569864    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:31.581173    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:31.581256    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:31.592108    6375 logs.go:282] 0 containers: []
	W1216 12:37:31.592118    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:31.592190    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:31.604265    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:31.604285    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:31.604291    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:31.642990    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:31.643004    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:31.654729    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:31.654742    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:31.667565    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:31.667577    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:31.694066    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:31.694077    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:31.708552    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:31.708562    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:31.720669    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:31.720679    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:31.735569    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:31.735581    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:31.770456    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:31.770469    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:31.784504    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:31.784518    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:31.802007    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:31.802020    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:31.817360    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:31.817370    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:31.828932    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:31.828945    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:31.853587    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:31.853598    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:31.865858    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:31.865871    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:31.871145    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:31.871157    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:31.886245    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:31.886259    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:34.400599    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:39.402915    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:39.403026    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:39.415538    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:39.415614    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:39.427336    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:39.427421    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:39.438811    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:39.438897    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:39.449993    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:39.450068    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:39.461591    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:39.461670    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:39.472629    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:39.472699    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:39.482947    6375 logs.go:282] 0 containers: []
	W1216 12:37:39.482959    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:39.483027    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:39.494816    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:39.494833    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:39.494839    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:39.534778    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:39.534791    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:39.559817    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:39.559830    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:39.574723    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:39.574736    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:39.586001    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:39.586012    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:39.598312    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:39.598323    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:39.612168    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:39.612180    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:39.626902    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:39.626914    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:39.643818    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:39.643831    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:39.658169    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:39.658182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:39.682990    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:39.682998    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:39.718164    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:39.718175    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:39.732108    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:39.732122    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:39.744370    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:39.744383    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:39.759959    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:39.759970    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:39.776386    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:39.776400    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:39.780404    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:39.780411    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:42.292859    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:47.293190    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:47.293289    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:47.308156    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:47.308247    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:47.319769    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:47.319852    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:47.331318    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:47.331402    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:47.343133    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:47.343220    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:47.358356    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:47.358430    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:47.371888    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:47.371960    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:47.382241    6375 logs.go:282] 0 containers: []
	W1216 12:37:47.382252    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:47.382314    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:47.392936    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:47.392958    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:47.392968    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:47.408479    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:47.408491    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:47.420702    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:47.420716    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:47.459221    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:47.459231    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:47.463449    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:47.463456    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:47.474771    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:47.474783    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:37:47.497006    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:47.497012    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:47.513240    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:47.513254    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:47.538396    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:47.538407    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:47.549580    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:47.549592    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:47.560813    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:47.560824    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:47.578034    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:47.578045    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:47.591822    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:47.591835    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:47.629561    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:47.629576    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:47.644520    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:47.644537    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:47.659478    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:47.659488    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:47.671173    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:47.671184    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
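The run above is one full iteration of minikube's control-plane wait loop: the apiserver's /healthz endpoint is probed, the 5-second client timeout expires ("context deadline exceeded"), and a fresh diagnostics round follows before the next probe. A minimal Go sketch of that probe, assuming an illustrative checkOnce helper and retry budget (not minikube's actual api_server.go internals):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkOnce issues a single healthz probe with the 5s client timeout
    // visible in the log ("Client.Timeout exceeded while awaiting headers").
    func checkOnce(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver inside the VM serves a self-signed cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	url := "https://10.0.2.15:8443/healthz"
    	deadline := time.Now().Add(4 * time.Minute) // the log's restart budget is ~4m
    	for time.Now().Before(deadline) {
    		if err := checkOnce(url); err == nil {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		time.Sleep(2 * time.Second) // roughly the gap between rounds in the log
    	}
    	fmt.Println("gave up waiting for apiserver")
    }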
	I1216 12:37:50.187508    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:37:55.189899    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:37:55.190008    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:37:55.202092    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:37:55.202174    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:37:55.213090    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:37:55.213176    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:37:55.224776    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:37:55.224858    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:37:55.239115    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:37:55.239193    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:37:55.249841    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:37:55.249918    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:37:55.262244    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:37:55.262322    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:37:55.272111    6375 logs.go:282] 0 containers: []
	W1216 12:37:55.272125    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:37:55.272196    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:37:55.282962    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:37:55.282979    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:37:55.282985    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:37:55.322876    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:37:55.322890    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:37:55.327236    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:37:55.327245    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:37:55.360680    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:37:55.360690    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:37:55.376695    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:37:55.376705    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:37:55.393348    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:37:55.393360    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:37:55.405463    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:37:55.405473    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:37:55.416952    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:37:55.416963    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:37:55.431521    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:37:55.431532    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:37:55.443108    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:37:55.443120    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:37:55.456464    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:37:55.456474    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:37:55.468698    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:37:55.468710    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:37:55.480650    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:37:55.480664    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:37:55.509339    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:37:55.509350    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:37:55.523621    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:37:55.523635    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:37:55.540357    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:37:55.540371    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:37:55.553207    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:37:55.553216    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
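Each failed probe triggers the same gathering pass: "docker logs --tail 400" per component container, "journalctl -n 400" for the kubelet and Docker units, plus dmesg and "kubectl describe nodes". A sketch of that pass, assuming a hypothetical tail helper and placeholder container IDs (real runs feed in the IDs enumerated just above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tail runs one gathering command and prints whatever comes back,
    // mirroring the ssh_runner "Run: /bin/bash -c ..." lines in the log.
    func tail(cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("error running", cmd, ":", err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	// Placeholder IDs; the real loop tails every enumerated container.
    	for _, id := range []string{"5dded5effd54", "7f47c1427198"} {
    		tail("docker logs --tail 400 " + id)
    	}
    	// Unit logs use the same 400-line cap via journalctl.
    	tail("sudo journalctl -u kubelet -n 400")
    	tail("sudo journalctl -u docker -u cri-docker -n 400")
    }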
	I1216 12:37:58.077223    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:03.079536    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:03.079674    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:03.092404    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:03.092485    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:03.104198    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:03.104274    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:03.121031    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:03.121105    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:03.132167    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:03.132252    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:03.142777    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:03.142853    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:03.153244    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:03.153324    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:03.165048    6375 logs.go:282] 0 containers: []
	W1216 12:38:03.165060    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:03.165123    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:03.175226    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:03.175243    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:03.175248    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:03.188500    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:03.188511    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:03.200031    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:03.200042    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:03.211890    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:03.211900    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:03.235815    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:03.235825    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:03.248352    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:03.248366    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:03.262308    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:03.262319    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:03.287316    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:03.287328    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:03.298398    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:03.298409    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:03.310446    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:03.310457    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:03.324200    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:03.324213    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:03.340785    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:03.340795    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:03.357178    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:03.357193    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:03.396092    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:03.396108    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:03.400480    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:03.400490    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:03.436407    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:03.436418    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:03.450882    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:03.450895    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:05.970151    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:10.972449    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:10.972547    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:10.984527    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:10.984602    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:10.995775    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:10.995860    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:11.007017    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:11.007096    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:11.018161    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:11.018249    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:11.028827    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:11.028902    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:11.039295    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:11.039365    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:11.049905    6375 logs.go:282] 0 containers: []
	W1216 12:38:11.049917    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:11.049984    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:11.060272    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:11.060292    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:11.060297    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:11.074172    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:11.074181    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:11.099955    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:11.099965    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:11.111465    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:11.111477    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:11.128128    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:11.128138    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:11.140060    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:11.140069    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:11.163583    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:11.163590    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:11.202236    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:11.202248    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:11.238651    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:11.238665    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:11.252582    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:11.252596    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:11.264129    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:11.264140    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:11.279333    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:11.279344    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:11.283659    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:11.283665    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:11.301221    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:11.301231    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:11.313110    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:11.313122    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:11.328152    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:11.328163    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:11.340126    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:11.340138    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
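The enumeration step at the top of every round maps each control-plane component to its container IDs with a name filter; two IDs per component here almost certainly means an exited earlier instance plus its restart, since "docker ps -a" includes stopped containers. A sketch of that lookup (listContainers is an illustrative name, not minikube's logs.go API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the repeated
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    // calls in the log.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
    	}
    }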
	I1216 12:38:13.855378    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:18.855970    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:18.856088    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:18.870176    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:18.870263    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:18.883058    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:18.883151    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:18.894495    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:18.894576    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:18.904502    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:18.904581    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:18.915103    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:18.915184    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:18.925892    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:18.925969    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:18.936342    6375 logs.go:282] 0 containers: []
	W1216 12:38:18.936356    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:18.936418    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:18.947131    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:18.947148    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:18.947154    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:18.961973    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:18.961985    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:18.973706    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:18.973717    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:18.989002    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:18.989014    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:19.006284    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:19.006294    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:19.019517    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:19.019528    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:19.032921    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:19.032932    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:19.044335    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:19.044344    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:19.055276    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:19.055288    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:19.059834    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:19.059843    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:19.094373    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:19.094385    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:19.118827    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:19.118838    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:19.130606    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:19.130617    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:19.153292    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:19.153300    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:19.189656    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:19.189663    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:19.204325    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:19.204337    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:19.215864    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:19.215878    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:21.730226    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:26.732807    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:26.732916    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:26.744780    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:26.744872    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:26.765976    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:26.766058    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:26.777260    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:26.777341    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:26.787788    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:26.787865    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:26.798730    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:26.798811    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:26.809352    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:26.809438    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:26.819179    6375 logs.go:282] 0 containers: []
	W1216 12:38:26.819192    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:26.819257    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:26.835627    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:26.835646    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:26.835651    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:26.850249    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:26.850259    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:26.862995    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:26.863010    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:26.878597    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:26.878607    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:26.891688    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:26.891702    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:26.903832    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:26.903844    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:26.917761    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:26.917771    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:26.931973    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:26.931987    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:26.943804    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:26.943821    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:26.955699    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:26.955709    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:26.973678    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:26.973689    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:27.012065    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:27.012073    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:27.016500    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:27.016505    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:27.050330    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:27.050342    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:27.069857    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:27.069868    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:27.094389    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:27.094400    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:27.106399    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:27.106414    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:29.631290    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:34.633599    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:34.633711    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:34.644288    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:34.644368    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:34.658710    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:34.658779    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:34.669336    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:34.669417    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:34.682645    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:34.682718    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:34.701393    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:34.701471    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:34.712430    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:34.712501    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:34.722746    6375 logs.go:282] 0 containers: []
	W1216 12:38:34.722757    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:34.722819    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:34.733442    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:34.733463    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:34.733469    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:34.747709    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:34.747719    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:34.765263    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:34.765274    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:34.769464    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:34.769471    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:34.781031    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:34.781043    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:34.792858    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:34.792872    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:34.804473    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:34.804484    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:34.822041    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:34.822054    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:34.833764    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:34.833775    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:34.845700    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:34.845710    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:34.870120    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:34.870131    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:34.907559    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:34.907572    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:34.951811    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:34.951822    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:34.966211    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:34.966221    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:34.980218    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:34.980230    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:34.995449    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:34.995459    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:35.007138    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:35.007150    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:37.530176    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:42.532491    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:42.532680    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:42.544174    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:42.544264    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:42.554606    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:42.554685    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:42.564882    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:42.564959    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:42.576011    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:42.576086    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:42.591133    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:42.591209    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:42.602217    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:42.602283    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:42.612062    6375 logs.go:282] 0 containers: []
	W1216 12:38:42.612073    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:42.612137    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:42.623790    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:42.623815    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:42.623822    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:42.660147    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:42.660154    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:42.695398    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:42.695412    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:42.710696    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:42.710707    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:42.721815    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:42.721825    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:42.734055    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:42.734066    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:42.749533    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:42.749544    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:42.770275    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:42.770285    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:42.794049    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:42.794059    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:42.809477    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:42.809488    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:42.820990    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:42.821004    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:42.825102    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:42.825110    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:42.850718    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:42.850735    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:42.863734    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:42.863745    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:42.875563    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:42.875575    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:42.888148    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:42.888160    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:42.902947    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:42.902961    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:45.418119    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:50.420569    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:50.420728    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:50.432524    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:50.432606    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:50.443542    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:50.443628    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:50.454190    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:50.454271    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:50.464959    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:50.465032    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:50.476141    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:50.476226    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:50.487642    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:50.487721    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:50.498081    6375 logs.go:282] 0 containers: []
	W1216 12:38:50.498091    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:50.498154    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:50.508749    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:50.508765    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:50.508772    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:50.522423    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:50.522435    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:50.534246    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:50.534258    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:50.557482    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:50.557489    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:50.572035    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:50.572044    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:50.609428    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:50.609435    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:38:50.613499    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:50.613506    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:50.627089    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:50.627099    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:50.638555    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:50.638566    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:50.655898    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:50.655908    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:50.667681    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:50.667692    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:50.682186    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:50.682197    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:50.706534    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:50.706545    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:50.720670    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:50.720681    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:50.742630    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:50.742640    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:50.780256    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:50.780270    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:50.801374    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:50.801390    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:53.315055    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:38:58.317379    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:38:58.317482    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:38:58.333275    6375 logs.go:282] 2 containers: [5dded5effd54 a43b19631f1d]
	I1216 12:38:58.333357    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:38:58.346443    6375 logs.go:282] 2 containers: [7f47c1427198 195b09e77a13]
	I1216 12:38:58.346522    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:38:58.356494    6375 logs.go:282] 1 containers: [207260e29059]
	I1216 12:38:58.356565    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:38:58.367577    6375 logs.go:282] 2 containers: [2f45cb4b5fbe 5c2af2bbc9dc]
	I1216 12:38:58.367656    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:38:58.378352    6375 logs.go:282] 1 containers: [d6aa72077d82]
	I1216 12:38:58.378427    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:38:58.388899    6375 logs.go:282] 2 containers: [ea99512a561d c238f990b3b5]
	I1216 12:38:58.388975    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:38:58.399441    6375 logs.go:282] 0 containers: []
	W1216 12:38:58.399451    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:38:58.399513    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:38:58.409632    6375 logs.go:282] 2 containers: [e34a292c7fc0 972082680dc6]
	I1216 12:38:58.409650    6375 logs.go:123] Gathering logs for kube-controller-manager [ea99512a561d] ...
	I1216 12:38:58.409656    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea99512a561d"
	I1216 12:38:58.429290    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:38:58.429300    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:38:58.468758    6375 logs.go:123] Gathering logs for etcd [7f47c1427198] ...
	I1216 12:38:58.468770    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47c1427198"
	I1216 12:38:58.490747    6375 logs.go:123] Gathering logs for etcd [195b09e77a13] ...
	I1216 12:38:58.490759    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 195b09e77a13"
	I1216 12:38:58.505301    6375 logs.go:123] Gathering logs for kube-proxy [d6aa72077d82] ...
	I1216 12:38:58.505311    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6aa72077d82"
	I1216 12:38:58.517350    6375 logs.go:123] Gathering logs for kube-controller-manager [c238f990b3b5] ...
	I1216 12:38:58.517362    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c238f990b3b5"
	I1216 12:38:58.531331    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:38:58.531341    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:38:58.543690    6375 logs.go:123] Gathering logs for kube-apiserver [a43b19631f1d] ...
	I1216 12:38:58.543702    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a43b19631f1d"
	I1216 12:38:58.568759    6375 logs.go:123] Gathering logs for coredns [207260e29059] ...
	I1216 12:38:58.568771    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207260e29059"
	I1216 12:38:58.584648    6375 logs.go:123] Gathering logs for kube-scheduler [2f45cb4b5fbe] ...
	I1216 12:38:58.584660    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f45cb4b5fbe"
	I1216 12:38:58.596514    6375 logs.go:123] Gathering logs for storage-provisioner [e34a292c7fc0] ...
	I1216 12:38:58.596524    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e34a292c7fc0"
	I1216 12:38:58.608154    6375 logs.go:123] Gathering logs for storage-provisioner [972082680dc6] ...
	I1216 12:38:58.608164    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 972082680dc6"
	I1216 12:38:58.619258    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:38:58.619271    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:38:58.642298    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:38:58.642306    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:38:58.678135    6375 logs.go:123] Gathering logs for kube-apiserver [5dded5effd54] ...
	I1216 12:38:58.678149    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dded5effd54"
	I1216 12:38:58.696508    6375 logs.go:123] Gathering logs for kube-scheduler [5c2af2bbc9dc] ...
	I1216 12:38:58.696519    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c2af2bbc9dc"
	I1216 12:38:58.714389    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:38:58.714400    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:39:01.220839    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:06.223473    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:06.223560    6375 kubeadm.go:597] duration metric: took 4m3.788947042s to restartPrimaryControlPlane
	W1216 12:39:06.223604    6375 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 12:39:06.223628    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 12:39:07.285209    6375 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.06156075s)
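After roughly four minutes of failed probes (4m3.8s in the duration metric above) the restart path is abandoned and minikube falls back to wiping the node: kubeadm reset with the version-matched binary forced to the front of PATH, then a clean kubeadm init. A sketch of that invocation, with runRemote standing in for minikube's ssh_runner (an assumption, not its real signature):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runRemote executes one command and reports its duration, like the
    // "Completed: ... (1.06156075s)" line above.
    func runRemote(cmd string) error {
    	start := time.Now()
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("Completed: %s: (%s)\n%s", cmd, time.Since(start), out)
    	return err
    }

    func main() {
    	// PATH is prefixed so the v1.24.1 kubeadm shipped by minikube wins
    	// over anything already installed in the guest.
    	reset := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
    		`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`
    	if err := runRemote(reset); err != nil {
    		fmt.Println("reset failed:", err)
    	}
    }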
	I1216 12:39:07.285285    6375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:39:07.290322    6375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 12:39:07.293082    6375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:39:07.296178    6375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:39:07.296184    6375 kubeadm.go:157] found existing configuration files:
	
	I1216 12:39:07.296212    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf
	I1216 12:39:07.299401    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:39:07.299430    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:39:07.302203    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf
	I1216 12:39:07.304548    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:39:07.304573    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:39:07.307671    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf
	I1216 12:39:07.310537    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:39:07.310589    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:39:07.313500    6375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf
	I1216 12:39:07.316100    6375 kubeadm.go:163] "https://control-plane.minikube.internal:51022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:39:07.316135    6375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
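Before re-running init, each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; here every grep exits with status 2 because the reset removed the files, so the rm -f calls are no-ops and init regenerates all four. A sketch of that keep-or-remove pass (endpoint and paths are taken from the log; the logic is an assumption about minikube's grep-then-rm pattern):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51022"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, c := range confs {
    		data, err := os.ReadFile(c)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or stale endpoint: drop it so kubeadm init
    			// writes a fresh one.
    			os.Remove(c)
    			fmt.Println("removed stale", c)
    			continue
    		}
    		fmt.Println("kept", c)
    	}
    }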
	I1216 12:39:07.319326    6375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:39:07.335956    6375 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1216 12:39:07.335999    6375 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:39:07.388293    6375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:39:07.388352    6375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:39:07.388443    6375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:39:07.441949    6375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:39:07.448046    6375 out.go:235]   - Generating certificates and keys ...
	I1216 12:39:07.448082    6375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:39:07.448111    6375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:39:07.448153    6375 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:39:07.448185    6375 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:39:07.448221    6375 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:39:07.448251    6375 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:39:07.448285    6375 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:39:07.448321    6375 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:39:07.448364    6375 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:39:07.448395    6375 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:39:07.448414    6375 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:39:07.448446    6375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:39:07.490439    6375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:39:07.687701    6375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:39:07.807208    6375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:39:07.888744    6375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:39:07.918264    6375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:39:07.918750    6375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:39:07.918773    6375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:39:07.990035    6375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:39:07.994205    6375 out.go:235]   - Booting up control plane ...
	I1216 12:39:07.994253    6375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:39:07.994295    6375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:39:07.994358    6375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:39:07.994414    6375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:39:07.994549    6375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:39:12.495273    6375 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501540 seconds
	I1216 12:39:12.495353    6375 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 12:39:12.499086    6375 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 12:39:13.005797    6375 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 12:39:13.005912    6375 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-349000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 12:39:13.509552    6375 kubeadm.go:310] [bootstrap-token] Using token: hrztia.rg8izit14ku9t5ga
	I1216 12:39:13.515743    6375 out.go:235]   - Configuring RBAC rules ...
	I1216 12:39:13.515801    6375 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 12:39:13.515852    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 12:39:13.523751    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 12:39:13.525835    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 12:39:13.526694    6375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 12:39:13.527438    6375 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 12:39:13.530596    6375 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 12:39:13.695460    6375 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 12:39:13.914509    6375 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 12:39:13.915078    6375 kubeadm.go:310] 
	I1216 12:39:13.915114    6375 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 12:39:13.915119    6375 kubeadm.go:310] 
	I1216 12:39:13.915163    6375 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 12:39:13.915169    6375 kubeadm.go:310] 
	I1216 12:39:13.915233    6375 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 12:39:13.915267    6375 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 12:39:13.915354    6375 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 12:39:13.915360    6375 kubeadm.go:310] 
	I1216 12:39:13.915390    6375 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 12:39:13.915417    6375 kubeadm.go:310] 
	I1216 12:39:13.915451    6375 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 12:39:13.915488    6375 kubeadm.go:310] 
	I1216 12:39:13.915516    6375 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 12:39:13.915576    6375 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 12:39:13.915649    6375 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 12:39:13.915663    6375 kubeadm.go:310] 
	I1216 12:39:13.915767    6375 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 12:39:13.915811    6375 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 12:39:13.915814    6375 kubeadm.go:310] 
	I1216 12:39:13.915885    6375 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hrztia.rg8izit14ku9t5ga \
	I1216 12:39:13.915936    6375 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f \
	I1216 12:39:13.915952    6375 kubeadm.go:310] 	--control-plane 
	I1216 12:39:13.915955    6375 kubeadm.go:310] 
	I1216 12:39:13.915992    6375 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 12:39:13.915994    6375 kubeadm.go:310] 
	I1216 12:39:13.916033    6375 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hrztia.rg8izit14ku9t5ga \
	I1216 12:39:13.916082    6375 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77b6eee289b51dced98f77757331e009228628d0dcb7ad47ffc742a9fad2ab5f 
	I1216 12:39:13.916213    6375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:39:13.916246    6375 cni.go:84] Creating CNI manager for ""
	I1216 12:39:13.916254    6375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:39:13.920605    6375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 12:39:13.927647    6375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 12:39:13.931390    6375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
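
The scp above writes the bridge CNI configuration the preceding cni.go lines recommend for the qemu2 driver with the docker runtime. A hedged Go sketch of that step follows; the JSON is a representative bridge/portmap conflist, not the exact 496 bytes the log reports, and writing under /etc/cni requires root.

    package main

    import "os"

    // Representative bridge CNI configuration; the real file minikube writes
    // may differ in fields and subnet.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	// Mirrors the "sudo mkdir -p /etc/cni/net.d" run above.
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
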
	I1216 12:39:13.937819    6375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 12:39:13.937938    6375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-349000 minikube.k8s.io/updated_at=2024_12_16T12_39_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=stopped-upgrade-349000 minikube.k8s.io/primary=true
	I1216 12:39:13.937969    6375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 12:39:13.943200    6375 ops.go:34] apiserver oom_adj: -16
	I1216 12:39:13.987290    6375 kubeadm.go:1113] duration metric: took 49.449459ms to wait for elevateKubeSystemPrivileges
	I1216 12:39:13.987307    6375 kubeadm.go:394] duration metric: took 4m11.566088709s to StartCluster
	I1216 12:39:13.987319    6375 settings.go:142] acquiring lock: {Name:mk8b3a21b6dc2a47a05d302a72ae4dd9a4679c92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:39:13.987417    6375 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:39:13.987868    6375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/kubeconfig: {Name:mk5db459efe4751fc2fdac6b17566890a2cc1c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:39:13.988069    6375 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:39:13.988092    6375 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 12:39:13.988168    6375 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:39:13.988176    6375 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-349000"
	I1216 12:39:13.988184    6375 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-349000"
	W1216 12:39:13.988187    6375 addons.go:243] addon storage-provisioner should already be in state true
	I1216 12:39:13.988229    6375 host.go:66] Checking if "stopped-upgrade-349000" exists ...
	I1216 12:39:13.988196    6375 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-349000"
	I1216 12:39:13.988247    6375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-349000"
	I1216 12:39:13.990583    6375 out.go:177] * Verifying Kubernetes components...
	I1216 12:39:13.991366    6375 kapi.go:59] client config for stopped-upgrade-349000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/profiles/stopped-upgrade-349000/client.key", CAFile:"/Users/jenkins/minikube-integration/20091-990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106cfef70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 12:39:13.994875    6375 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-349000"
	W1216 12:39:13.994880    6375 addons.go:243] addon default-storageclass should already be in state true
	I1216 12:39:13.994891    6375 host.go:66] Checking if "stopped-upgrade-349000" exists ...
	I1216 12:39:13.995445    6375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 12:39:13.995450    6375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 12:39:13.995455    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:39:13.998633    6375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 12:39:14.001719    6375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 12:39:14.004656    6375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 12:39:14.004662    6375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 12:39:14.004667    6375 sshutil.go:53] new ssh client: &{IP:localhost Port:50988 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/stopped-upgrade-349000/id_rsa Username:docker}
	I1216 12:39:14.076120    6375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 12:39:14.081936    6375 api_server.go:52] waiting for apiserver process to appear ...
	I1216 12:39:14.082010    6375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 12:39:14.085788    6375 api_server.go:72] duration metric: took 97.707041ms to wait for apiserver process to appear ...
	I1216 12:39:14.085796    6375 api_server.go:88] waiting for apiserver healthz status ...
	I1216 12:39:14.085804    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:14.092252    6375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 12:39:14.110510    6375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
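
The two runs above apply the addon manifests with the guest's pinned kubectl binary and in-VM kubeconfig. A minimal Go sketch of that invocation, with paths taken from the log; the applyAddon helper name is hypothetical, and exec.Command here stands in for minikube's ssh_runner, which runs the same command inside the guest.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func applyAddon(manifest string) error {
    	// sudo accepts VAR=value assignments before the command.
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		if err := applyAddon(m); err != nil {
    			fmt.Println("apply failed:", err)
    		}
    	}
    }
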
	I1216 12:39:14.463400    6375 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 12:39:14.463413    6375 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 12:39:19.087937    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
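
The api_server.go pair above begins the probe loop that repeats for the rest of this run: GET /healthz with a short per-request timeout, log "stopped" on deadline errors, retry until the 6m budget noted at start.go:235 runs out. A minimal Go sketch of that pattern, assuming a plain net/http client; the settings and structure are illustrative, not minikube's actual implementation.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s spacing of the checks above
    		Transport: &http.Transport{
    			// Sketch only; minikube verifies against its own CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Printf("stopped: %v\n", err) // e.g. Client.Timeout exceeded
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }
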
	I1216 12:39:19.087983    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:24.088292    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:24.088334    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:29.088735    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:29.088763    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:34.089255    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:34.089297    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:39.089968    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:39.090010    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:44.090872    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:44.090929    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1216 12:39:44.466028    6375 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1216 12:39:44.470263    6375 out.go:177] * Enabled addons: storage-provisioner
	I1216 12:39:44.477164    6375 addons.go:510] duration metric: took 30.488819125s for enable addons: enabled=[storage-provisioner]
	I1216 12:39:49.092099    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:49.092175    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:54.093801    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:54.093853    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:39:59.094349    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:39:59.094418    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:04.096512    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:04.096538    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:09.096708    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:09.096731    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:14.089517    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:14.089649    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:40:14.102902    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:40:14.102991    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:40:14.113792    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:40:14.113866    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:40:14.124171    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:40:14.124246    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:40:14.139523    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:40:14.139599    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:40:14.150276    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:40:14.150354    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:40:14.160418    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:40:14.160491    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:40:14.170419    6375 logs.go:282] 0 containers: []
	W1216 12:40:14.170433    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:40:14.170499    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:40:14.180741    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:40:14.180756    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:40:14.180762    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:40:14.217943    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:40:14.217958    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:40:14.232271    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:40:14.232284    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:40:14.246359    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:40:14.246370    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:40:14.258447    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:40:14.258457    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:40:14.283779    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:40:14.283797    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:40:14.295790    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:40:14.295801    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:40:14.300502    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:40:14.300512    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:40:14.312337    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:40:14.312351    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:40:14.327509    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:40:14.327518    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:40:14.339706    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:40:14.339716    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:40:14.357616    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:40:14.357632    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:40:14.374551    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:40:14.374566    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
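
Each diagnostic cycle like the one above first enumerates the container for each control-plane component with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tails that container's logs. A hedged Go sketch of the enumeration step; listContainers is a hypothetical helper name, and exec.Command again stands in for ssh_runner executing the command in the guest.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of containers whose name matches
    // k8s_<component>, the kubelet's docker naming convention.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 containers => "No container was found"
    	}
    }
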
	I1216 12:40:16.907033    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:21.903749    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:21.903921    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:40:21.920054    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:40:21.920151    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:40:21.932440    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:40:21.932515    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:40:21.943373    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:40:21.943450    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:40:21.954211    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:40:21.954295    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:40:21.968654    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:40:21.968730    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:40:21.979005    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:40:21.979079    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:40:21.989629    6375 logs.go:282] 0 containers: []
	W1216 12:40:21.989640    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:40:21.989699    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:40:21.999779    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:40:21.999795    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:40:21.999800    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:40:22.024710    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:40:22.024722    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:40:22.029197    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:40:22.029206    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:40:22.043246    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:40:22.043261    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:40:22.055188    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:40:22.055199    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:40:22.066443    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:40:22.066454    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:40:22.081639    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:40:22.081652    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:40:22.095610    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:40:22.095624    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:40:22.106897    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:40:22.106909    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:40:22.119843    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:40:22.119853    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:40:22.152833    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:40:22.152840    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:40:22.187580    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:40:22.187591    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:40:22.201978    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:40:22.201989    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:40:24.720244    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:29.719345    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:29.719922    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:40:29.766218    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:40:29.766362    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:40:29.786167    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:40:29.786272    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:40:29.800135    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:40:29.800217    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:40:29.811825    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:40:29.811903    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:40:29.822932    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:40:29.823012    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:40:29.834652    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:40:29.834733    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:40:29.845035    6375 logs.go:282] 0 containers: []
	W1216 12:40:29.845052    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:40:29.845108    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:40:29.855788    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:40:29.855803    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:40:29.855809    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:40:29.873553    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:40:29.873566    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:40:29.891698    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:40:29.891714    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:40:29.903769    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:40:29.903781    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:40:29.928314    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:40:29.928327    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:40:29.942420    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:40:29.942434    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:40:29.975504    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:40:29.975511    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:40:30.010643    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:40:30.010659    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:40:30.022719    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:40:30.022730    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:40:30.034005    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:40:30.034020    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:40:30.045261    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:40:30.045274    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:40:30.068768    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:40:30.068775    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:40:30.072900    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:40:30.072909    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:40:32.591698    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:37.591137    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:37.591631    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:40:37.628124    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:40:37.628279    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:40:37.648977    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:40:37.649081    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:40:37.664265    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:40:37.664351    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:40:37.676786    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:40:37.676858    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:40:37.687956    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:40:37.688037    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:40:37.698477    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:40:37.698556    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:40:37.708816    6375 logs.go:282] 0 containers: []
	W1216 12:40:37.708834    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:40:37.708895    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:40:37.723287    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:40:37.723304    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:40:37.723310    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:40:37.735174    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:40:37.735186    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:40:37.770043    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:40:37.770057    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:40:37.774435    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:40:37.774441    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:40:37.789069    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:40:37.789080    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:40:37.803815    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:40:37.803826    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:40:37.815515    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:40:37.815525    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:40:37.827436    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:40:37.827449    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:40:37.862554    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:40:37.862567    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:40:37.877428    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:40:37.877440    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:40:37.889093    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:40:37.889106    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:40:37.906479    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:40:37.906491    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:40:37.930324    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:40:37.930331    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:40:40.443230    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:45.444370    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:45.444482    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:40:45.462184    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:40:45.462265    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:40:45.482395    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:40:45.482462    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:40:45.495802    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:40:45.495878    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:40:45.508845    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:40:45.508914    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:40:45.519907    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:40:45.519974    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:40:45.531137    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:40:45.531212    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:40:45.542080    6375 logs.go:282] 0 containers: []
	W1216 12:40:45.542088    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:40:45.542150    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:40:45.552930    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:40:45.552943    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:40:45.552960    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:40:45.569553    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:40:45.569563    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:40:45.584089    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:40:45.584098    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:40:45.595938    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:40:45.595949    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:40:45.608008    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:40:45.608019    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:40:45.612407    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:40:45.612418    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:40:45.653485    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:40:45.653496    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:40:45.665012    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:40:45.665022    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:40:45.680968    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:40:45.680979    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:40:45.698169    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:40:45.698178    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:40:45.716677    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:40:45.716688    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:40:45.739409    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:40:45.739417    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:40:45.750735    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:40:45.750746    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:40:48.286830    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:40:53.288796    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:40:53.289338    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:40:53.329892    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:40:53.330040    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:40:53.353613    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:40:53.353754    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:40:53.369203    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:40:53.369296    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:40:53.381451    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:40:53.381533    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:40:53.392910    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:40:53.392983    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:40:53.403967    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:40:53.404036    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:40:53.413908    6375 logs.go:282] 0 containers: []
	W1216 12:40:53.413919    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:40:53.413971    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:40:53.424567    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:40:53.424584    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:40:53.424589    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:40:53.439258    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:40:53.439267    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:40:53.451146    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:40:53.451157    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:40:53.462392    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:40:53.462404    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:40:53.475138    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:40:53.475148    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:40:53.508643    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:40:53.508650    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:40:53.513245    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:40:53.513252    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:40:53.552122    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:40:53.552133    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:40:53.568488    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:40:53.568499    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:40:53.580370    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:40:53.580383    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:40:53.596581    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:40:53.596591    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:40:53.614058    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:40:53.614068    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:40:53.626179    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:40:53.626191    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:40:56.150457    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:01.152281    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:01.152814    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:01.194238    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:01.194389    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:01.216232    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:01.216352    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:01.233635    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:01.233721    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:01.246397    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:01.246474    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:01.257031    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:01.257097    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:01.268587    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:01.268664    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:01.287153    6375 logs.go:282] 0 containers: []
	W1216 12:41:01.287164    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:01.287226    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:01.298045    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:01.298060    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:01.298066    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:01.309746    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:01.309756    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:01.321955    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:01.321967    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:01.355729    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:01.355737    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:01.391252    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:01.391265    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:01.403065    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:01.403078    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:01.418677    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:01.418687    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:01.436631    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:01.436644    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:01.448588    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:01.448602    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:01.473150    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:01.473159    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:01.484872    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:01.484886    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:01.489443    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:01.489450    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:01.503608    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:01.503621    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:04.026576    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:09.028784    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:09.029079    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:09.053639    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:09.053765    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:09.075425    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:09.075504    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:09.087923    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:09.088000    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:09.098836    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:09.098915    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:09.109611    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:09.109689    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:09.120108    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:09.120180    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:09.131175    6375 logs.go:282] 0 containers: []
	W1216 12:41:09.131184    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:09.131240    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:09.141957    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:09.141975    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:09.141981    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:09.156523    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:09.156534    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:09.170667    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:09.170680    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:09.182168    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:09.182181    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:09.199515    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:09.199528    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:09.211574    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:09.211587    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:09.223387    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:09.223400    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:09.228136    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:09.228145    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:09.269373    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:09.269387    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:09.281192    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:09.281203    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:09.299839    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:09.299849    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:09.311970    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:09.311983    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:09.336264    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:09.336270    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:11.873497    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:16.874288    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:16.874523    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:16.893442    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:16.893541    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:16.907003    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:16.907076    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:16.919928    6375 logs.go:282] 2 containers: [ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:16.920010    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:16.931714    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:16.931800    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:16.942574    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:16.942644    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:16.953221    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:16.953298    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:16.964297    6375 logs.go:282] 0 containers: []
	W1216 12:41:16.964310    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:16.964377    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:16.975503    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:16.975516    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:16.975521    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:16.990398    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:16.990410    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:17.002448    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:17.002461    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:17.017686    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:17.017699    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:17.035487    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:17.035498    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:17.047491    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:17.047502    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:17.059555    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:17.059565    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:17.084255    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:17.084263    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:17.119084    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:17.119094    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:17.123265    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:17.123274    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:17.159482    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:17.159493    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:17.174672    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:17.174685    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:17.190969    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:17.190984    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
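	[editor's note] Each diagnostics round has the same two-step shape visible in the Run: lines above: first enumerate the control-plane containers with docker ps name filters, then tail the last 400 lines of each match. A hedged Go sketch of that gather step, shelling out to the same docker commands the log records (helper names are invented for illustration):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs mirrors `docker ps -a --filter=name=k8s_<component>
	    // --format={{.ID}}`, returning one ID per matching container.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    // tailLogs mirrors `docker logs --tail 400 <id>`.
	    func tailLogs(id string) (string, error) {
	        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
	            ids, err := containerIDs(component)
	            if err != nil {
	                fmt.Println(component, "error:", err)
	                continue
	            }
	            fmt.Printf("%d containers: %v\n", len(ids), ids)
	            for _, id := range ids {
	                logs, _ := tailLogs(id)
	                _ = logs // collected into the report
	            }
	        }
	    }

	Note that the coredns count grows from 2 to 4 containers between rounds above, so the enumeration is re-run on every cycle rather than cached.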
	I1216 12:41:19.705534    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:24.707979    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:24.708165    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:24.720603    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:24.720681    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:24.731413    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:24.731495    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:24.742005    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:24.742086    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:24.752685    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:24.752750    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:24.762959    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:24.763044    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:24.773468    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:24.773545    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:24.783671    6375 logs.go:282] 0 containers: []
	W1216 12:41:24.783683    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:24.783741    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:24.794470    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:24.794492    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:24.794499    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:24.807181    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:24.807193    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:24.819239    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:24.819252    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:24.843999    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:24.844007    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:24.866345    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:24.866359    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:24.878978    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:24.878992    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:24.914336    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:41:24.914348    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:41:24.929586    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:41:24.929599    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:41:24.943584    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:24.943601    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:24.979623    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:24.979630    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:24.997802    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:24.997811    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:25.015096    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:25.015108    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:25.027712    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:25.027726    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:25.043715    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:25.043728    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:25.055859    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:25.055874    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:27.571652    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:32.573909    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:32.574420    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:32.607936    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:32.608082    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:32.627699    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:32.627800    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:32.642954    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:32.643027    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:32.655163    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:32.655232    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:32.670239    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:32.670318    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:32.680921    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:32.680996    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:32.690847    6375 logs.go:282] 0 containers: []
	W1216 12:41:32.690860    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:32.690921    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:32.703506    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:32.703526    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:32.703533    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:32.715682    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:32.715693    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:32.731337    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:32.731348    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:32.745495    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:41:32.745505    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:41:32.756670    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:32.756681    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:32.767956    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:41:32.767965    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:41:32.779132    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:32.779144    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:32.797454    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:32.797467    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:32.811159    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:32.811172    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:32.834573    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:32.834589    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:32.847022    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:32.847036    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:32.851869    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:32.851875    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:32.887318    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:32.887327    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:32.899477    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:32.899488    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:32.932765    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:32.932774    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:35.446489    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:40.449292    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:40.449866    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:40.489429    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:40.489585    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:40.511473    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:40.511597    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:40.526642    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:40.526739    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:40.539373    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:40.539451    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:40.550705    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:40.550774    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:40.563875    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:40.563950    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:40.575018    6375 logs.go:282] 0 containers: []
	W1216 12:41:40.575031    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:40.575098    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:40.585734    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:40.585750    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:40.585756    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:40.600133    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:40.600146    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:40.612109    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:40.612122    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:40.645607    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:41:40.645621    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:41:40.657714    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:40.657724    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:40.670323    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:40.670337    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:40.691140    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:40.691152    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:40.714409    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:40.714419    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:40.726776    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:40.726785    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:40.731489    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:41:40.731497    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:41:40.743192    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:40.743207    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:40.755447    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:40.755460    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:40.774910    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:40.774919    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:40.789883    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:40.789893    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:40.801653    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:40.801665    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
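	[editor's note] Stepping back, the timestamps show a fixed cadence on every iteration: a five-second healthz timeout, a few seconds of log gathering (docker logs, journalctl, dmesg, kubectl describe nodes), then the next probe, repeating until some overall wait deadline expires. A sketch of that outer wait loop — the pause and deadline values here are inferred from the timestamps above, not taken from minikube's source:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // waitForAPIServer polls probe until it succeeds or deadline passes,
	    // pausing briefly between attempts. On each failure a diagnostics
	    // round would run where the comment marks it, as in the log above.
	    func waitForAPIServer(probe func() error, deadline time.Time, pause time.Duration) error {
	        for attempt := 1; ; attempt++ {
	            if err := probe(); err == nil {
	                return nil
	            }
	            // gatherDiagnostics() would run here: enumerate containers,
	            // tail their logs, dump journalctl/dmesg, describe nodes.
	            if time.Now().After(deadline) {
	                return fmt.Errorf("apiserver never became healthy after %d attempts", attempt)
	            }
	            time.Sleep(pause)
	        }
	    }

	    func main() {
	        // Stand-in probe that always fails, matching this run's behavior.
	        probe := func() error { return fmt.Errorf("context deadline exceeded") }
	        err := waitForAPIServer(probe, time.Now().Add(30*time.Second), 2500*time.Millisecond)
	        fmt.Println(err)
	    }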
	I1216 12:41:43.338738    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:48.341406    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:48.341487    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:48.353180    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:48.353246    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:48.364127    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:48.364206    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:48.376136    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:48.376211    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:48.389912    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:48.389969    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:48.401049    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:48.401115    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:48.412333    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:48.412410    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:48.424665    6375 logs.go:282] 0 containers: []
	W1216 12:41:48.424684    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:48.424768    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:48.436493    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:48.436508    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:48.436515    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:48.448499    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:48.448512    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:48.461968    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:48.461979    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:48.478281    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:48.478294    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:48.492596    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:48.492603    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:48.510955    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:48.510972    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:48.548559    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:48.548571    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:48.567121    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:48.567135    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:48.581816    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:48.581825    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:48.606651    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:48.606670    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:48.620639    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:48.620650    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:48.656213    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:48.656228    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:48.671877    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:41:48.671891    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:41:48.684083    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:48.684093    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:48.688712    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:41:48.688722    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:41:51.205145    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:41:56.206402    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:41:56.206976    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:41:56.244886    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:41:56.245034    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:41:56.262705    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:41:56.262808    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:41:56.277217    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:41:56.277303    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:41:56.288274    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:41:56.288353    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:41:56.298810    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:41:56.298890    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:41:56.309251    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:41:56.309327    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:41:56.319112    6375 logs.go:282] 0 containers: []
	W1216 12:41:56.319124    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:41:56.319188    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:41:56.329917    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:41:56.329936    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:41:56.329942    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:41:56.347285    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:41:56.347299    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:41:56.359597    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:41:56.359611    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:41:56.384007    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:41:56.384015    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:41:56.422777    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:41:56.422788    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:41:56.437216    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:41:56.437224    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:41:56.454456    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:41:56.454467    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:41:56.467102    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:41:56.467117    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:41:56.481385    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:41:56.481399    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:41:56.505995    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:41:56.506009    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:41:56.523995    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:41:56.524009    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:41:56.540807    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:41:56.540824    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:41:56.560850    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:41:56.560873    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:41:56.597026    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:41:56.597050    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:41:56.603283    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:41:56.603296    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:41:59.121318    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:04.123612    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:04.124111    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:04.158559    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:04.158698    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:04.183487    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:04.183584    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:04.196623    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:04.196706    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:04.207507    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:04.207603    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:04.221983    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:04.222066    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:04.232777    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:04.232855    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:04.247935    6375 logs.go:282] 0 containers: []
	W1216 12:42:04.247946    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:04.248003    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:04.258805    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:04.258820    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:04.258825    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:04.263813    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:04.263822    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:04.278573    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:04.278586    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:04.314749    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:04.314759    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:04.332925    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:04.332935    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:04.344987    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:04.344997    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:04.356292    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:04.356303    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:04.371161    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:04.371171    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:04.388620    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:04.388629    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:04.404125    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:04.404140    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:04.439540    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:04.439553    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:04.465033    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:04.465040    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:04.476631    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:04.476643    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:04.488097    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:04.488109    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:04.500006    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:04.500018    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:07.013714    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:12.014585    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:12.014687    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:12.026157    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:12.026224    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:12.036798    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:12.036863    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:12.048672    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:12.048745    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:12.060219    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:12.060296    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:12.071799    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:12.071851    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:12.089712    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:12.089784    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:12.113682    6375 logs.go:282] 0 containers: []
	W1216 12:42:12.113693    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:12.113742    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:12.133774    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:12.133790    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:12.133798    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:12.138816    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:12.138826    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:12.155144    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:12.155155    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:12.167914    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:12.167926    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:12.179955    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:12.179966    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:12.199528    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:12.199540    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:12.218232    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:12.218247    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:12.244271    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:12.244291    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:12.281280    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:12.281301    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:12.293723    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:12.293740    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:12.306827    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:12.306839    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:12.344356    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:12.344370    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:12.357375    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:12.357387    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:12.372795    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:12.372814    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:12.386332    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:12.386346    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:14.903838    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:19.906747    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:19.907358    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:19.946765    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:19.946961    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:19.974879    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:19.974989    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:19.989324    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:19.989420    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:20.001105    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:20.001173    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:20.015611    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:20.015680    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:20.026484    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:20.026561    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:20.036218    6375 logs.go:282] 0 containers: []
	W1216 12:42:20.036228    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:20.036282    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:20.047646    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:20.047665    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:20.047673    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:20.052426    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:20.052436    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:20.070393    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:20.070404    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:20.081571    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:20.081585    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:20.093120    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:20.093133    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:20.103983    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:20.103996    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:20.115544    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:20.115556    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:20.127463    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:20.127476    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:20.142295    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:20.142309    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:20.177479    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:20.177494    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:20.195668    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:20.195679    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:20.210005    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:20.210017    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:20.234873    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:20.234884    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:20.269747    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:20.269755    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:20.281437    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:20.281449    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:22.799327    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:27.802299    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:27.802560    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:27.831052    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:27.831194    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:27.849704    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:27.849787    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:27.863229    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:27.863328    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:27.875563    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:27.875645    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:27.886295    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:27.886376    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:27.896589    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:27.896660    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:27.906793    6375 logs.go:282] 0 containers: []
	W1216 12:42:27.906802    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:27.906857    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:27.917095    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:27.917111    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:27.917116    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:27.931173    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:27.931188    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:27.942506    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:27.942520    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:27.953737    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:27.953748    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:27.970903    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:27.970914    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:27.982858    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:27.982872    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:27.987620    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:27.987628    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:28.002184    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:28.002196    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:28.016286    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:28.016298    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:28.028385    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:28.028398    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:28.051399    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:28.051406    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:28.084845    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:28.084855    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:28.118548    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:28.118559    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:28.135835    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:28.135848    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:28.147371    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:28.147381    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:30.660774    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:35.662076    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:35.662684    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:35.701311    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:35.701464    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:35.722287    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:35.722396    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:35.737501    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:35.737595    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:35.749994    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:35.750073    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:35.763498    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:35.763579    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:35.774411    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:35.774484    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:35.784386    6375 logs.go:282] 0 containers: []
	W1216 12:42:35.784396    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:35.784460    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:35.794972    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:35.794991    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:35.794996    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:35.830750    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:35.830759    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:35.842874    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:35.842883    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:35.861151    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:35.861160    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:35.882909    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:35.882918    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:35.901071    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:35.901084    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:35.924894    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:35.924906    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:35.937520    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:35.937530    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:35.953850    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:35.953864    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:35.966737    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:35.966748    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:35.971637    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:35.971647    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:36.010500    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:36.010512    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:36.023771    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:36.023782    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:36.036897    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:36.036908    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:36.062187    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:36.062204    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:38.576706    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:43.578962    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:43.579432    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:43.609094    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:43.609239    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:43.627827    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:43.627924    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:43.641728    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:43.641809    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:43.653584    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:43.653660    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:43.664102    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:43.664166    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:43.674455    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:43.674534    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:43.684371    6375 logs.go:282] 0 containers: []
	W1216 12:42:43.684382    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:43.684439    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:43.695379    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:43.695402    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:43.695408    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:43.710582    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:43.710599    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:43.745725    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:43.745737    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:43.783247    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:43.783258    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:43.798275    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:43.798286    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:43.809800    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:43.809811    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:43.827254    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:43.827264    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:43.851554    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:43.851561    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:43.869835    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:43.869846    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:43.881629    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:43.881639    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:43.893047    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:43.893058    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:43.908598    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:43.908611    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:43.920534    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:43.920545    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:43.931884    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:43.931893    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:43.936538    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:43.936546    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:46.452601    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:51.455436    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:51.455972    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:51.491994    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:51.492138    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:51.518215    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:51.518347    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:51.532388    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:51.532479    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:51.544571    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:51.544651    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:51.556835    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:51.556911    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:51.567721    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:51.567797    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:51.581805    6375 logs.go:282] 0 containers: []
	W1216 12:42:51.581817    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:51.581902    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:51.592104    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:51.592124    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:51.592131    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:51.627223    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:51.627232    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:51.631910    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:51.631918    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:51.667099    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:51.667110    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:51.681398    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:51.681410    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:51.692890    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:51.692901    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:51.704675    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:51.704687    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:42:51.719115    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:51.719128    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:51.737582    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:51.737600    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:51.767700    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:51.767712    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:51.780802    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:51.780813    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:51.798057    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:51.798068    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:51.821966    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:51.821974    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:51.836936    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:51.836948    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:51.849107    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:51.849117    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:54.362966    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:42:59.365516    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:42:59.366077    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:42:59.404891    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:42:59.405032    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:42:59.425660    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:42:59.425770    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:42:59.441171    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:42:59.441264    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:42:59.453947    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:42:59.454019    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:42:59.464725    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:42:59.464807    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:42:59.475118    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:42:59.475202    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:42:59.486525    6375 logs.go:282] 0 containers: []
	W1216 12:42:59.486535    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:42:59.486596    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:42:59.496905    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:42:59.496925    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:42:59.496931    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:42:59.509293    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:42:59.509306    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:42:59.521990    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:42:59.522002    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:42:59.541532    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:42:59.541543    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:42:59.565647    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:42:59.565657    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:42:59.576901    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:42:59.576913    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:42:59.609649    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:42:59.609656    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:42:59.623102    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:42:59.623112    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:42:59.635078    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:42:59.635088    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:42:59.639285    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:42:59.639294    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:42:59.651482    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:42:59.651494    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:42:59.665860    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:42:59.665870    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:42:59.681274    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:42:59.681285    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:42:59.698862    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:42:59.698872    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:42:59.734573    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:42:59.734588    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:43:02.252719    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:43:07.255058    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:43:07.255540    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 12:43:07.292965    6375 logs.go:282] 1 containers: [12673511f958]
	I1216 12:43:07.293114    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 12:43:07.314558    6375 logs.go:282] 1 containers: [060d3a51aea3]
	I1216 12:43:07.314672    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 12:43:07.330796    6375 logs.go:282] 4 containers: [2299368944d0 c69aadb7b850 ed547f32e68c d16cd6e9cfc5]
	I1216 12:43:07.330893    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 12:43:07.343712    6375 logs.go:282] 1 containers: [ddaf711d8107]
	I1216 12:43:07.343792    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 12:43:07.355124    6375 logs.go:282] 1 containers: [07aa1236f71c]
	I1216 12:43:07.355197    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 12:43:07.365917    6375 logs.go:282] 1 containers: [5a2a54c8bb11]
	I1216 12:43:07.365992    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 12:43:07.376040    6375 logs.go:282] 0 containers: []
	W1216 12:43:07.376058    6375 logs.go:284] No container was found matching "kindnet"
	I1216 12:43:07.376123    6375 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 12:43:07.387131    6375 logs.go:282] 1 containers: [dc3f2ca127f4]
	I1216 12:43:07.387153    6375 logs.go:123] Gathering logs for coredns [2299368944d0] ...
	I1216 12:43:07.387158    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2299368944d0"
	I1216 12:43:07.399084    6375 logs.go:123] Gathering logs for coredns [c69aadb7b850] ...
	I1216 12:43:07.399097    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c69aadb7b850"
	I1216 12:43:07.410899    6375 logs.go:123] Gathering logs for container status ...
	I1216 12:43:07.410911    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 12:43:07.422301    6375 logs.go:123] Gathering logs for kube-apiserver [12673511f958] ...
	I1216 12:43:07.422312    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12673511f958"
	I1216 12:43:07.437236    6375 logs.go:123] Gathering logs for etcd [060d3a51aea3] ...
	I1216 12:43:07.437244    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060d3a51aea3"
	I1216 12:43:07.451313    6375 logs.go:123] Gathering logs for coredns [ed547f32e68c] ...
	I1216 12:43:07.451324    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed547f32e68c"
	I1216 12:43:07.463090    6375 logs.go:123] Gathering logs for coredns [d16cd6e9cfc5] ...
	I1216 12:43:07.463111    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d16cd6e9cfc5"
	I1216 12:43:07.478275    6375 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:43:07.478285    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 12:43:07.513266    6375 logs.go:123] Gathering logs for kube-scheduler [ddaf711d8107] ...
	I1216 12:43:07.513279    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaf711d8107"
	I1216 12:43:07.529820    6375 logs.go:123] Gathering logs for kubelet ...
	I1216 12:43:07.529830    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:43:07.564171    6375 logs.go:123] Gathering logs for dmesg ...
	I1216 12:43:07.564182    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:43:07.568562    6375 logs.go:123] Gathering logs for storage-provisioner [dc3f2ca127f4] ...
	I1216 12:43:07.568572    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc3f2ca127f4"
	I1216 12:43:07.580814    6375 logs.go:123] Gathering logs for Docker ...
	I1216 12:43:07.580827    6375 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 12:43:07.604276    6375 logs.go:123] Gathering logs for kube-proxy [07aa1236f71c] ...
	I1216 12:43:07.604283    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07aa1236f71c"
	I1216 12:43:07.616246    6375 logs.go:123] Gathering logs for kube-controller-manager [5a2a54c8bb11] ...
	I1216 12:43:07.616259    6375 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a2a54c8bb11"
	I1216 12:43:10.136387    6375 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1216 12:43:15.139160    6375 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 12:43:15.142634    6375 out.go:201] 
	W1216 12:43:15.145721    6375 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1216 12:43:15.145727    6375 out.go:270] * 
	W1216 12:43:15.146182    6375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:15.157755    6375 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-349000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.04s)
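
Note: the failure above is minikube's API-server readiness loop giving up after its 6m0s node wait; every probe of https://10.0.2.15:8443/healthz in the log hits the 5s client timeout. For manual debugging the probe can be approximated from the host with a plain curl loop; the address, port, and timeout below are copied from this log, and the loop is an illustration only, not part of the test harness:

	until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
	    sleep 2    # keep retrying until /healthz answers, mirroring the api_server.go polling above
	done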

TestPause/serial/Start (9.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-606000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-606000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.933702125s)

-- stdout --
	* [pause-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-606000" primary control-plane node in "pause-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-606000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-606000 -n pause-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-606000 -n pause-606000: exit status 7 (56.513167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.99s)
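
Note: this failure and the remaining ones in this report share a single root cause: minikube launches QEMU through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not running on the host. A host-side sanity check, assuming the install paths shown in these logs (the gateway address is the socket_vmnet README default and may differ per setup):

	ls -l /var/run/socket_vmnet    # the daemon's listening socket should exist
	# if it is missing, start the daemon manually (normally managed by launchd):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet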

TestNoKubernetes/serial/StartWithK8s (9.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-861000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-861000 --driver=qemu2 : exit status 80 (9.796195959s)

-- stdout --
	* [NoKubernetes-861000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-861000" primary control-plane node in "NoKubernetes-861000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-861000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000: exit status 7 (46.533916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.84s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --driver=qemu2 : exit status 80 (5.271010333s)

-- stdout --
	* [NoKubernetes-861000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-861000
	* Restarting existing qemu2 VM for "NoKubernetes-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000: exit status 7 (48.39575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247153666s)

-- stdout --
	* [NoKubernetes-861000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-861000
	* Restarting existing qemu2 VM for "NoKubernetes-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000: exit status 7 (62.464834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-861000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-861000 --driver=qemu2 : exit status 80 (5.273956459s)

-- stdout --
	* [NoKubernetes-861000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-861000
	* Restarting existing qemu2 VM for "NoKubernetes-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-861000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-861000 -n NoKubernetes-861000: exit status 7 (64.068875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

TestNetworkPlugins/group/auto/Start (10.06s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.058437s)

-- stdout --
	* [auto-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-838000" primary control-plane node in "auto-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:41:36.583826    6652 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:41:36.583978    6652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:41:36.583982    6652 out.go:358] Setting ErrFile to fd 2...
	I1216 12:41:36.583984    6652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:41:36.584104    6652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:41:36.585272    6652 out.go:352] Setting JSON to false
	I1216 12:41:36.603404    6652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4267,"bootTime":1734377429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:41:36.603475    6652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:41:36.609669    6652 out.go:177] * [auto-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:41:36.617655    6652 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:41:36.617711    6652 notify.go:220] Checking for updates...
	I1216 12:41:36.625448    6652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:41:36.628591    6652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:41:36.632640    6652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:41:36.640635    6652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:41:36.643625    6652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:41:36.647988    6652 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:41:36.648055    6652 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:41:36.648098    6652 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:41:36.651666    6652 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:41:36.658633    6652 start.go:297] selected driver: qemu2
	I1216 12:41:36.658640    6652 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:41:36.658646    6652 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:41:36.660992    6652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:41:36.664633    6652 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:41:36.667759    6652 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:41:36.667779    6652 cni.go:84] Creating CNI manager for ""
	I1216 12:41:36.667804    6652 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:41:36.667809    6652 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:41:36.667851    6652 start.go:340] cluster config:
	{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:auto-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:41:36.672237    6652 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:41:36.680717    6652 out.go:177] * Starting "auto-838000" primary control-plane node in "auto-838000" cluster
	I1216 12:41:36.684615    6652 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:41:36.684633    6652 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:41:36.684644    6652 cache.go:56] Caching tarball of preloaded images
	I1216 12:41:36.684724    6652 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:41:36.684729    6652 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:41:36.684790    6652 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/auto-838000/config.json ...
	I1216 12:41:36.684806    6652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/auto-838000/config.json: {Name:mk005221e7f7240d6d786428a212cb0b3b7afc05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:41:36.685237    6652 start.go:360] acquireMachinesLock for auto-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:41:36.685279    6652 start.go:364] duration metric: took 36.75µs to acquireMachinesLock for "auto-838000"
	I1216 12:41:36.685291    6652 start.go:93] Provisioning new machine with config: &{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:auto-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:41:36.685325    6652 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:41:36.689695    6652 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:41:36.704123    6652 start.go:159] libmachine.API.Create for "auto-838000" (driver="qemu2")
	I1216 12:41:36.704151    6652 client.go:168] LocalClient.Create starting
	I1216 12:41:36.704212    6652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:41:36.704252    6652 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:36.704260    6652 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:36.704299    6652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:41:36.704328    6652 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:36.704336    6652 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:36.704829    6652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:41:36.876082    6652 main.go:141] libmachine: Creating SSH key...
	I1216 12:41:36.992079    6652 main.go:141] libmachine: Creating Disk image...
	I1216 12:41:36.992092    6652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:41:36.992356    6652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2
	I1216 12:41:37.002380    6652 main.go:141] libmachine: STDOUT: 
	I1216 12:41:37.002401    6652 main.go:141] libmachine: STDERR: 
	I1216 12:41:37.002460    6652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2 +20000M
	I1216 12:41:37.011159    6652 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:41:37.011176    6652 main.go:141] libmachine: STDERR: 
	I1216 12:41:37.011199    6652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2
	I1216 12:41:37.011204    6652 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:41:37.011215    6652 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:41:37.011242    6652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:85:71:c8:60:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2
	I1216 12:41:37.012992    6652 main.go:141] libmachine: STDOUT: 
	I1216 12:41:37.013015    6652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:41:37.013034    6652 client.go:171] duration metric: took 308.879375ms to LocalClient.Create
	I1216 12:41:39.015115    6652 start.go:128] duration metric: took 2.329795667s to createHost
	I1216 12:41:39.015174    6652 start.go:83] releasing machines lock for "auto-838000", held for 2.329906334s
	W1216 12:41:39.015196    6652 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:41:39.022035    6652 out.go:177] * Deleting "auto-838000" in qemu2 ...
	W1216 12:41:39.042627    6652 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:41:39.042640    6652 start.go:729] Will try again in 5 seconds ...
	I1216 12:41:44.044803    6652 start.go:360] acquireMachinesLock for auto-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:41:44.045098    6652 start.go:364] duration metric: took 242.959µs to acquireMachinesLock for "auto-838000"
	I1216 12:41:44.045126    6652 start.go:93] Provisioning new machine with config: &{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:auto-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:41:44.045224    6652 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:41:44.054944    6652 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:41:44.081171    6652 start.go:159] libmachine.API.Create for "auto-838000" (driver="qemu2")
	I1216 12:41:44.081207    6652 client.go:168] LocalClient.Create starting
	I1216 12:41:44.081289    6652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:41:44.081350    6652 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:44.081364    6652 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:44.081406    6652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:41:44.081445    6652 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:44.081455    6652 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:44.081887    6652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:41:44.254238    6652 main.go:141] libmachine: Creating SSH key...
	I1216 12:41:44.539893    6652 main.go:141] libmachine: Creating Disk image...
	I1216 12:41:44.539907    6652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:41:44.540143    6652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2
	I1216 12:41:44.550335    6652 main.go:141] libmachine: STDOUT: 
	I1216 12:41:44.550362    6652 main.go:141] libmachine: STDERR: 
	I1216 12:41:44.550447    6652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2 +20000M
	I1216 12:41:44.559738    6652 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:41:44.559756    6652 main.go:141] libmachine: STDERR: 
	I1216 12:41:44.559773    6652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2
	I1216 12:41:44.559780    6652 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:41:44.559786    6652 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:41:44.559821    6652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:38:38:47:07:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/auto-838000/disk.qcow2
	I1216 12:41:44.561896    6652 main.go:141] libmachine: STDOUT: 
	I1216 12:41:44.561910    6652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:41:44.561926    6652 client.go:171] duration metric: took 480.716167ms to LocalClient.Create
	I1216 12:41:46.564122    6652 start.go:128] duration metric: took 2.518879625s to createHost
	I1216 12:41:46.564259    6652 start.go:83] releasing machines lock for "auto-838000", held for 2.519159375s
	W1216 12:41:46.564554    6652 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:41:46.579358    6652 out.go:201] 
	W1216 12:41:46.582623    6652 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:41:46.582665    6652 out.go:270] * 
	* 
	W1216 12:41:46.584339    6652 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:41:46.600406    6652 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.06s)
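Note: every start in this group dies at the same step. socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched; the client's role is to connect to the daemon's socket and hand the resulting file descriptor to QEMU (the -netdev socket,id=net0,fd=3 argument in the command line above). A standalone probe of that socket, a diagnostic sketch in Go rather than anything from minikube itself, with the socket path taken from the log:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path copied from the failing socket_vmnet_client command.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused": nothing is listening on the socket;
            // "no such file or directory": the daemon was never started.
            fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

If the probe fails the same way, the socket_vmnet daemon is simply down on this agent (it normally runs as a root service), which would explain the entire TestNetworkPlugins group failing in roughly 10s per profile.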

TestNetworkPlugins/group/kindnet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.858347709s)

-- stdout --
	* [kindnet-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-838000" primary control-plane node in "kindnet-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:41:49.016621    6763 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:41:49.016799    6763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:41:49.016803    6763 out.go:358] Setting ErrFile to fd 2...
	I1216 12:41:49.016805    6763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:41:49.016947    6763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:41:49.018253    6763 out.go:352] Setting JSON to false
	I1216 12:41:49.036901    6763 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4280,"bootTime":1734377429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:41:49.037015    6763 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:41:49.044016    6763 out.go:177] * [kindnet-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:41:49.051997    6763 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:41:49.052041    6763 notify.go:220] Checking for updates...
	I1216 12:41:49.059859    6763 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:41:49.062935    6763 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:41:49.065929    6763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:41:49.067024    6763 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:41:49.069975    6763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:41:49.073396    6763 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:41:49.073469    6763 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:41:49.073513    6763 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:41:49.077820    6763 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:41:49.084973    6763 start.go:297] selected driver: qemu2
	I1216 12:41:49.084983    6763 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:41:49.084992    6763 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:41:49.087622    6763 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:41:49.091855    6763 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:41:49.095059    6763 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:41:49.095083    6763 cni.go:84] Creating CNI manager for "kindnet"
	I1216 12:41:49.095090    6763 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 12:41:49.095124    6763 start.go:340] cluster config:
	{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kindnet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:41:49.099599    6763 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:41:49.107896    6763 out.go:177] * Starting "kindnet-838000" primary control-plane node in "kindnet-838000" cluster
	I1216 12:41:49.111974    6763 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:41:49.111991    6763 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:41:49.112003    6763 cache.go:56] Caching tarball of preloaded images
	I1216 12:41:49.112093    6763 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:41:49.112098    6763 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:41:49.112175    6763 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kindnet-838000/config.json ...
	I1216 12:41:49.112186    6763 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kindnet-838000/config.json: {Name:mk607d45ef00bfb451260fa10eaf67f0fed0e078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:41:49.112629    6763 start.go:360] acquireMachinesLock for kindnet-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:41:49.112676    6763 start.go:364] duration metric: took 41.583µs to acquireMachinesLock for "kindnet-838000"
	I1216 12:41:49.112690    6763 start.go:93] Provisioning new machine with config: &{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.32.0 ClusterName:kindnet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:41:49.112723    6763 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:41:49.120944    6763 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:41:49.137337    6763 start.go:159] libmachine.API.Create for "kindnet-838000" (driver="qemu2")
	I1216 12:41:49.137366    6763 client.go:168] LocalClient.Create starting
	I1216 12:41:49.137442    6763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:41:49.137479    6763 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:49.137489    6763 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:49.137541    6763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:41:49.137570    6763 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:49.137582    6763 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:49.137958    6763 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:41:49.310225    6763 main.go:141] libmachine: Creating SSH key...
	I1216 12:41:49.369573    6763 main.go:141] libmachine: Creating Disk image...
	I1216 12:41:49.369579    6763 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:41:49.369809    6763 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2
	I1216 12:41:49.379667    6763 main.go:141] libmachine: STDOUT: 
	I1216 12:41:49.379687    6763 main.go:141] libmachine: STDERR: 
	I1216 12:41:49.379752    6763 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2 +20000M
	I1216 12:41:49.388355    6763 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:41:49.388372    6763 main.go:141] libmachine: STDERR: 
	I1216 12:41:49.388389    6763 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2
	I1216 12:41:49.388396    6763 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:41:49.388409    6763 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:41:49.388440    6763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:6f:7b:31:3d:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2
	I1216 12:41:49.390234    6763 main.go:141] libmachine: STDOUT: 
	I1216 12:41:49.390249    6763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:41:49.390267    6763 client.go:171] duration metric: took 252.897083ms to LocalClient.Create
	I1216 12:41:51.392356    6763 start.go:128] duration metric: took 2.279627791s to createHost
	I1216 12:41:51.392381    6763 start.go:83] releasing machines lock for "kindnet-838000", held for 2.279705375s
	W1216 12:41:51.392427    6763 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:41:51.408762    6763 out.go:177] * Deleting "kindnet-838000" in qemu2 ...
	W1216 12:41:51.436032    6763 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:41:51.436046    6763 start.go:729] Will try again in 5 seconds ...
	I1216 12:41:56.436123    6763 start.go:360] acquireMachinesLock for kindnet-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:41:56.436253    6763 start.go:364] duration metric: took 111µs to acquireMachinesLock for "kindnet-838000"
	I1216 12:41:56.436266    6763 start.go:93] Provisioning new machine with config: &{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.32.0 ClusterName:kindnet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:41:56.436319    6763 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:41:56.447519    6763 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:41:56.463491    6763 start.go:159] libmachine.API.Create for "kindnet-838000" (driver="qemu2")
	I1216 12:41:56.463521    6763 client.go:168] LocalClient.Create starting
	I1216 12:41:56.463602    6763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:41:56.463641    6763 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:56.463654    6763 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:56.463693    6763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:41:56.463723    6763 main.go:141] libmachine: Decoding PEM data...
	I1216 12:41:56.463732    6763 main.go:141] libmachine: Parsing certificate...
	I1216 12:41:56.464135    6763 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:41:56.636044    6763 main.go:141] libmachine: Creating SSH key...
	I1216 12:41:56.774335    6763 main.go:141] libmachine: Creating Disk image...
	I1216 12:41:56.774342    6763 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:41:56.774577    6763 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2
	I1216 12:41:56.784932    6763 main.go:141] libmachine: STDOUT: 
	I1216 12:41:56.784967    6763 main.go:141] libmachine: STDERR: 
	I1216 12:41:56.785038    6763 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2 +20000M
	I1216 12:41:56.795111    6763 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:41:56.795140    6763 main.go:141] libmachine: STDERR: 
	I1216 12:41:56.795174    6763 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2
	I1216 12:41:56.795180    6763 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:41:56.795193    6763 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:41:56.795226    6763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:78:4b:53:8e:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kindnet-838000/disk.qcow2
	I1216 12:41:56.797324    6763 main.go:141] libmachine: STDOUT: 
	I1216 12:41:56.797345    6763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:41:56.797357    6763 client.go:171] duration metric: took 333.833083ms to LocalClient.Create
	I1216 12:41:58.799559    6763 start.go:128] duration metric: took 2.363209125s to createHost
	I1216 12:41:58.799624    6763 start.go:83] releasing machines lock for "kindnet-838000", held for 2.363364833s
	W1216 12:41:58.799966    6763 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:41:58.808515    6763 out.go:201] 
	W1216 12:41:58.818763    6763 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:41:58.818799    6763 out.go:270] * 
	* 
	W1216 12:41:58.821604    6763 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:41:58.830450    6763 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.86s)
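Note: the control flow above is identical for every profile: createHost fails, minikube deletes the half-created machine, waits five seconds, and makes exactly one more attempt before exiting with GUEST_PROVISION. A simplified sketch of that shape (createHost and deleteHost here are hypothetical stand-ins, not minikube's real functions):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost and deleteHost stand in for the real libmachine calls;
    // in this run createHost always fails the same way.
    func createHost(name string) error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func deleteHost(name string) {
        fmt.Printf("* Deleting %q in qemu2 ...\n", name)
    }

    // startWithRetry mirrors the shape of the log: one failure triggers a
    // delete plus a single retry after five seconds, then the error is final.
    func startWithRetry(name string) error {
        err := createHost(name)
        if err == nil {
            return nil
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        deleteHost(name)
        time.Sleep(5 * time.Second)
        return createHost(name)
    }

    func main() {
        if err := startWithRetry("kindnet-838000"); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }

That fixed one-retry budget is why each of these tests fails in just under ten seconds: two ~2.5s createHost attempts plus the 5s wait.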

TestNetworkPlugins/group/calico/Start (9.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.910733208s)

-- stdout --
	* [calico-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-838000" primary control-plane node in "calico-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:42:01.254993    6878 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:42:01.255146    6878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:01.255151    6878 out.go:358] Setting ErrFile to fd 2...
	I1216 12:42:01.255154    6878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:01.255288    6878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:42:01.256396    6878 out.go:352] Setting JSON to false
	I1216 12:42:01.274668    6878 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4292,"bootTime":1734377429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:42:01.274759    6878 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:42:01.280111    6878 out.go:177] * [calico-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:42:01.288055    6878 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:42:01.288103    6878 notify.go:220] Checking for updates...
	I1216 12:42:01.296957    6878 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:42:01.303850    6878 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:42:01.307949    6878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:42:01.310958    6878 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:42:01.312402    6878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:42:01.315240    6878 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:42:01.315313    6878 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:42:01.315367    6878 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:42:01.318969    6878 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:42:01.323887    6878 start.go:297] selected driver: qemu2
	I1216 12:42:01.323893    6878 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:42:01.323898    6878 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:42:01.326181    6878 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:42:01.329930    6878 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:42:01.331118    6878 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:42:01.331134    6878 cni.go:84] Creating CNI manager for "calico"
	I1216 12:42:01.331142    6878 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1216 12:42:01.331179    6878 start.go:340] cluster config:
	{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:42:01.335363    6878 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:42:01.343986    6878 out.go:177] * Starting "calico-838000" primary control-plane node in "calico-838000" cluster
	I1216 12:42:01.347923    6878 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:42:01.347938    6878 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:42:01.347950    6878 cache.go:56] Caching tarball of preloaded images
	I1216 12:42:01.348016    6878 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:42:01.348023    6878 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:42:01.348079    6878 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/calico-838000/config.json ...
	I1216 12:42:01.348091    6878 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/calico-838000/config.json: {Name:mk5e7bcb942934d80154922546acc84b9165b283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:42:01.348507    6878 start.go:360] acquireMachinesLock for calico-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:01.348548    6878 start.go:364] duration metric: took 36.5µs to acquireMachinesLock for "calico-838000"
	I1216 12:42:01.348560    6878 start.go:93] Provisioning new machine with config: &{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.32.0 ClusterName:calico-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:01.348589    6878 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:01.351998    6878 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:01.366764    6878 start.go:159] libmachine.API.Create for "calico-838000" (driver="qemu2")
	I1216 12:42:01.366791    6878 client.go:168] LocalClient.Create starting
	I1216 12:42:01.366868    6878 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:01.366907    6878 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:01.366919    6878 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:01.366958    6878 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:01.366988    6878 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:01.366995    6878 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:01.367463    6878 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:01.536319    6878 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:01.571725    6878 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:01.571731    6878 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:01.571977    6878 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2
	I1216 12:42:01.581907    6878 main.go:141] libmachine: STDOUT: 
	I1216 12:42:01.581928    6878 main.go:141] libmachine: STDERR: 
	I1216 12:42:01.581990    6878 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2 +20000M
	I1216 12:42:01.590871    6878 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:01.590889    6878 main.go:141] libmachine: STDERR: 
	I1216 12:42:01.590911    6878 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2
	I1216 12:42:01.590917    6878 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:01.590930    6878 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:01.590960    6878 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:75:24:0a:06:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2
	I1216 12:42:01.592776    6878 main.go:141] libmachine: STDOUT: 
	I1216 12:42:01.592790    6878 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:01.592810    6878 client.go:171] duration metric: took 226.013208ms to LocalClient.Create
	I1216 12:42:03.594533    6878 start.go:128] duration metric: took 2.245919292s to createHost
	I1216 12:42:03.594597    6878 start.go:83] releasing machines lock for "calico-838000", held for 2.246045166s
	W1216 12:42:03.594644    6878 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:03.599297    6878 out.go:177] * Deleting "calico-838000" in qemu2 ...
	W1216 12:42:03.628121    6878 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:03.628144    6878 start.go:729] Will try again in 5 seconds ...
	I1216 12:42:08.630389    6878 start.go:360] acquireMachinesLock for calico-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:08.631105    6878 start.go:364] duration metric: took 590.875µs to acquireMachinesLock for "calico-838000"
	I1216 12:42:08.631250    6878 start.go:93] Provisioning new machine with config: &{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.32.0 ClusterName:calico-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:08.631548    6878 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:08.640975    6878 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:08.689914    6878 start.go:159] libmachine.API.Create for "calico-838000" (driver="qemu2")
	I1216 12:42:08.689969    6878 client.go:168] LocalClient.Create starting
	I1216 12:42:08.690107    6878 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:08.690188    6878 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:08.690217    6878 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:08.690285    6878 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:08.690341    6878 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:08.690352    6878 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:08.690983    6878 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:08.871113    6878 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:09.059554    6878 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:09.059567    6878 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:09.059841    6878 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2
	I1216 12:42:09.070785    6878 main.go:141] libmachine: STDOUT: 
	I1216 12:42:09.070816    6878 main.go:141] libmachine: STDERR: 
	I1216 12:42:09.070888    6878 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2 +20000M
	I1216 12:42:09.079703    6878 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:09.079717    6878 main.go:141] libmachine: STDERR: 
	I1216 12:42:09.079731    6878 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2
	I1216 12:42:09.079737    6878 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:09.079747    6878 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:09.079784    6878 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:1f:18:f9:94:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/calico-838000/disk.qcow2
	I1216 12:42:09.081637    6878 main.go:141] libmachine: STDOUT: 
	I1216 12:42:09.081653    6878 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:09.081666    6878 client.go:171] duration metric: took 391.691ms to LocalClient.Create
	I1216 12:42:11.083875    6878 start.go:128] duration metric: took 2.452289875s to createHost
	I1216 12:42:11.083950    6878 start.go:83] releasing machines lock for "calico-838000", held for 2.452818584s
	W1216 12:42:11.084357    6878 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:11.101182    6878 out.go:201] 
	W1216 12:42:11.104243    6878 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:42:11.104307    6878 out.go:270] * 
	* 
	W1216 12:42:11.106956    6878 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:42:11.119141    6878 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.91s)
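This failure, and every TestNetworkPlugins failure that follows, shows the same signature: libmachine builds the disk image successfully, then the qemu2 driver's VM launch through socket_vmnet_client is refused at /var/run/socket_vmnet, so no VM ever boots. A minimal diagnostic sketch for the build agent (shell; assumes socket_vmnet is installed under /opt/socket_vmnet as in these logs, and the relaunch invocation follows socket_vmnet's documented usage, so verify it against the installed version):

	ls -l /var/run/socket_vmnet     # the unix socket the driver dials; missing or stale explains "Connection refused"
	pgrep -fl socket_vmnet          # check whether the daemon process is running at all
	# if it is not, relaunch it (gateway address is an example; match the agent's network):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &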

TestNetworkPlugins/group/custom-flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.848275792s)

-- stdout --
	* [custom-flannel-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-838000" primary control-plane node in "custom-flannel-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:42:13.767799    7004 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:42:13.767982    7004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:13.767985    7004 out.go:358] Setting ErrFile to fd 2...
	I1216 12:42:13.767987    7004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:13.768097    7004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:42:13.769324    7004 out.go:352] Setting JSON to false
	I1216 12:42:13.787399    7004 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4304,"bootTime":1734377429,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:42:13.787474    7004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:42:13.794550    7004 out.go:177] * [custom-flannel-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:42:13.804384    7004 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:42:13.804465    7004 notify.go:220] Checking for updates...
	I1216 12:42:13.812302    7004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:42:13.815219    7004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:42:13.819287    7004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:42:13.822303    7004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:42:13.825275    7004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:42:13.828634    7004 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:42:13.828707    7004 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:42:13.828756    7004 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:42:13.832316    7004 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:42:13.839306    7004 start.go:297] selected driver: qemu2
	I1216 12:42:13.839313    7004 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:42:13.839321    7004 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:42:13.841698    7004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:42:13.845298    7004 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:42:13.848362    7004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:42:13.848377    7004 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1216 12:42:13.848384    7004 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1216 12:42:13.848410    7004 start.go:340] cluster config:
	{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:42:13.852636    7004 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:42:13.861270    7004 out.go:177] * Starting "custom-flannel-838000" primary control-plane node in "custom-flannel-838000" cluster
	I1216 12:42:13.865299    7004 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:42:13.865317    7004 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:42:13.865332    7004 cache.go:56] Caching tarball of preloaded images
	I1216 12:42:13.865410    7004 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:42:13.865415    7004 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:42:13.865479    7004 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/custom-flannel-838000/config.json ...
	I1216 12:42:13.865489    7004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/custom-flannel-838000/config.json: {Name:mke72816820eb72247dd96115c6118c8eff88baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:42:13.865719    7004 start.go:360] acquireMachinesLock for custom-flannel-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:13.865761    7004 start.go:364] duration metric: took 36.75µs to acquireMachinesLock for "custom-flannel-838000"
	I1216 12:42:13.865774    7004 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:13.865804    7004 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:13.874274    7004 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:13.888789    7004 start.go:159] libmachine.API.Create for "custom-flannel-838000" (driver="qemu2")
	I1216 12:42:13.888819    7004 client.go:168] LocalClient.Create starting
	I1216 12:42:13.888896    7004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:13.888933    7004 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:13.888944    7004 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:13.888980    7004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:13.889009    7004 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:13.889017    7004 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:13.889406    7004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:14.058506    7004 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:14.151128    7004 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:14.151134    7004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:14.151400    7004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2
	I1216 12:42:14.161759    7004 main.go:141] libmachine: STDOUT: 
	I1216 12:42:14.161777    7004 main.go:141] libmachine: STDERR: 
	I1216 12:42:14.161839    7004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2 +20000M
	I1216 12:42:14.170557    7004 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:14.170572    7004 main.go:141] libmachine: STDERR: 
	I1216 12:42:14.170591    7004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2
	I1216 12:42:14.170596    7004 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:14.170609    7004 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:14.170635    7004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c4:3d:f3:ef:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2
	I1216 12:42:14.172503    7004 main.go:141] libmachine: STDOUT: 
	I1216 12:42:14.172518    7004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:14.172538    7004 client.go:171] duration metric: took 283.71475ms to LocalClient.Create
	I1216 12:42:16.174744    7004 start.go:128] duration metric: took 2.308904083s to createHost
	I1216 12:42:16.174870    7004 start.go:83] releasing machines lock for "custom-flannel-838000", held for 2.309100791s
	W1216 12:42:16.174934    7004 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:16.188345    7004 out.go:177] * Deleting "custom-flannel-838000" in qemu2 ...
	W1216 12:42:16.219154    7004 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:16.219188    7004 start.go:729] Will try again in 5 seconds ...
	I1216 12:42:21.221380    7004 start.go:360] acquireMachinesLock for custom-flannel-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:21.222034    7004 start.go:364] duration metric: took 560.083µs to acquireMachinesLock for "custom-flannel-838000"
	I1216 12:42:21.222182    7004 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:21.222523    7004 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:21.232958    7004 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:21.279967    7004 start.go:159] libmachine.API.Create for "custom-flannel-838000" (driver="qemu2")
	I1216 12:42:21.280015    7004 client.go:168] LocalClient.Create starting
	I1216 12:42:21.280231    7004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:21.280345    7004 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:21.280364    7004 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:21.280437    7004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:21.280498    7004 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:21.280509    7004 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:21.281164    7004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:21.461172    7004 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:21.503832    7004 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:21.503841    7004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:21.504077    7004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2
	I1216 12:42:21.514039    7004 main.go:141] libmachine: STDOUT: 
	I1216 12:42:21.514067    7004 main.go:141] libmachine: STDERR: 
	I1216 12:42:21.514128    7004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2 +20000M
	I1216 12:42:21.522699    7004 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:21.522714    7004 main.go:141] libmachine: STDERR: 
	I1216 12:42:21.522732    7004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2
	I1216 12:42:21.522737    7004 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:21.522748    7004 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:21.522779    7004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:2e:af:c4:6e:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/custom-flannel-838000/disk.qcow2
	I1216 12:42:21.524634    7004 main.go:141] libmachine: STDOUT: 
	I1216 12:42:21.524649    7004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:21.524667    7004 client.go:171] duration metric: took 244.6475ms to LocalClient.Create
	I1216 12:42:23.526869    7004 start.go:128] duration metric: took 2.304306667s to createHost
	I1216 12:42:23.526959    7004 start.go:83] releasing machines lock for "custom-flannel-838000", held for 2.304899208s
	W1216 12:42:23.527306    7004 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:23.540347    7004 out.go:201] 
	W1216 12:42:23.546263    7004 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:42:23.546292    7004 out.go:270] * 
	* 
	W1216 12:42:23.548643    7004 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:42:23.569115    7004 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.85s)
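The refusal is reproducible outside the test harness with the same client binary the driver shells out to. socket_vmnet_client connects to the daemon's unix socket and then execs the rest of its argument list with that connection passed as file descriptor 3, which is why every qemu-system-aarch64 command captured above carries -netdev socket,id=net0,fd=3. A sketch, using true as a stand-in for the qemu command line:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# with no daemon listening this fails up front with the error captured above:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused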

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.780513792s)

-- stdout --
	* [false-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-838000" primary control-plane node in "false-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:42:26.135729    7125 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:42:26.135887    7125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:26.135890    7125 out.go:358] Setting ErrFile to fd 2...
	I1216 12:42:26.135893    7125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:26.136015    7125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:42:26.137266    7125 out.go:352] Setting JSON to false
	I1216 12:42:26.155250    7125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4317,"bootTime":1734377429,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:42:26.155319    7125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:42:26.162361    7125 out.go:177] * [false-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:42:26.170310    7125 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:42:26.170345    7125 notify.go:220] Checking for updates...
	I1216 12:42:26.179234    7125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:42:26.182290    7125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:42:26.186262    7125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:42:26.189250    7125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:42:26.192280    7125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:42:26.195647    7125 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:42:26.195728    7125 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:42:26.195776    7125 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:42:26.200215    7125 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:42:26.207237    7125 start.go:297] selected driver: qemu2
	I1216 12:42:26.207246    7125 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:42:26.207256    7125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:42:26.209881    7125 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:42:26.213287    7125 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:42:26.217393    7125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:42:26.217415    7125 cni.go:84] Creating CNI manager for "false"
	I1216 12:42:26.217454    7125 start.go:340] cluster config:
	{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:false-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:42:26.222245    7125 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:42:26.230203    7125 out.go:177] * Starting "false-838000" primary control-plane node in "false-838000" cluster
	I1216 12:42:26.233175    7125 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:42:26.233190    7125 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:42:26.233201    7125 cache.go:56] Caching tarball of preloaded images
	I1216 12:42:26.233288    7125 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:42:26.233294    7125 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:42:26.233354    7125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/false-838000/config.json ...
	I1216 12:42:26.233366    7125 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/false-838000/config.json: {Name:mkfbcf0398b958651b0513cae50db5bb44fa2458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:42:26.233710    7125 start.go:360] acquireMachinesLock for false-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:26.233762    7125 start.go:364] duration metric: took 45.5µs to acquireMachinesLock for "false-838000"
	I1216 12:42:26.233774    7125 start.go:93] Provisioning new machine with config: &{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:false-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:26.233800    7125 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:26.238284    7125 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:26.253827    7125 start.go:159] libmachine.API.Create for "false-838000" (driver="qemu2")
	I1216 12:42:26.253855    7125 client.go:168] LocalClient.Create starting
	I1216 12:42:26.253932    7125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:26.253976    7125 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:26.253987    7125 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:26.254027    7125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:26.254056    7125 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:26.254063    7125 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:26.254462    7125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:26.422230    7125 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:26.470424    7125 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:26.470431    7125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:26.470669    7125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2
	I1216 12:42:26.480773    7125 main.go:141] libmachine: STDOUT: 
	I1216 12:42:26.480794    7125 main.go:141] libmachine: STDERR: 
	I1216 12:42:26.480852    7125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2 +20000M
	I1216 12:42:26.489316    7125 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:26.489332    7125 main.go:141] libmachine: STDERR: 
	I1216 12:42:26.489360    7125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2
	I1216 12:42:26.489366    7125 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:26.489378    7125 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:26.489413    7125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:58:7b:7b:af:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2
	I1216 12:42:26.491198    7125 main.go:141] libmachine: STDOUT: 
	I1216 12:42:26.491214    7125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:26.491233    7125 client.go:171] duration metric: took 237.372209ms to LocalClient.Create
	I1216 12:42:28.493395    7125 start.go:128] duration metric: took 2.259582584s to createHost
	I1216 12:42:28.493430    7125 start.go:83] releasing machines lock for "false-838000", held for 2.259662208s
	W1216 12:42:28.493471    7125 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:28.507122    7125 out.go:177] * Deleting "false-838000" in qemu2 ...
	W1216 12:42:28.531314    7125 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:28.531334    7125 start.go:729] Will try again in 5 seconds ...
	I1216 12:42:33.533636    7125 start.go:360] acquireMachinesLock for false-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:33.534309    7125 start.go:364] duration metric: took 554.75µs to acquireMachinesLock for "false-838000"
	I1216 12:42:33.534458    7125 start.go:93] Provisioning new machine with config: &{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:false-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:33.534735    7125 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:33.545342    7125 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:33.592336    7125 start.go:159] libmachine.API.Create for "false-838000" (driver="qemu2")
	I1216 12:42:33.592387    7125 client.go:168] LocalClient.Create starting
	I1216 12:42:33.592537    7125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:33.592625    7125 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:33.592644    7125 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:33.592706    7125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:33.592771    7125 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:33.592788    7125 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:33.593940    7125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:33.774553    7125 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:33.819939    7125 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:33.819947    7125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:33.820166    7125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2
	I1216 12:42:33.830389    7125 main.go:141] libmachine: STDOUT: 
	I1216 12:42:33.830421    7125 main.go:141] libmachine: STDERR: 
	I1216 12:42:33.830482    7125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2 +20000M
	I1216 12:42:33.839075    7125 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:33.839097    7125 main.go:141] libmachine: STDERR: 
	I1216 12:42:33.839109    7125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2
	I1216 12:42:33.839114    7125 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:33.839131    7125 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:33.839159    7125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:23:2e:ee:33:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/false-838000/disk.qcow2
	I1216 12:42:33.841023    7125 main.go:141] libmachine: STDOUT: 
	I1216 12:42:33.841044    7125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:33.841059    7125 client.go:171] duration metric: took 248.6645ms to LocalClient.Create
	I1216 12:42:35.842122    7125 start.go:128] duration metric: took 2.307346208s to createHost
	I1216 12:42:35.842155    7125 start.go:83] releasing machines lock for "false-838000", held for 2.307819958s
	W1216 12:42:35.842249    7125 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:35.857545    7125 out.go:201] 
	W1216 12:42:35.863498    7125 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:42:35.863503    7125 out.go:270] * 
	* 
	W1216 12:42:35.864019    7125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:42:35.876365    7125 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
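Two details worth reading off these logs: exit status 80 is minikube's generic guest-error exit code, which the GUEST_PROVISION reason above maps onto, so it signals a VM-provisioning failure rather than anything specific to the CNI under test; and the roughly 9.8s durations match the driver's retry loop, in which the first create fails immediately, the driver waits 5 seconds ("Will try again in 5 seconds ..."), and the second attempt fails identically. To confirm from a collected log file that every create attempt hit the same socket, a sketch (logs.txt is the hypothetical file produced by the `minikube logs --file=logs.txt` advice printed above):

	grep -c 'Failed to connect to "/var/run/socket_vmnet"' logs.txt   # every VM create attempt in this run should match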

TestNetworkPlugins/group/enable-default-cni/Start (9.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.872875042s)

-- stdout --
	* [enable-default-cni-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-838000" primary control-plane node in "enable-default-cni-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:42:38.235186    7236 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:42:38.235357    7236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:38.235363    7236 out.go:358] Setting ErrFile to fd 2...
	I1216 12:42:38.235366    7236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:38.235513    7236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:42:38.236712    7236 out.go:352] Setting JSON to false
	I1216 12:42:38.255703    7236 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4329,"bootTime":1734377429,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:42:38.255781    7236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:42:38.264688    7236 out.go:177] * [enable-default-cni-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:42:38.273637    7236 notify.go:220] Checking for updates...
	I1216 12:42:38.278596    7236 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:42:38.282661    7236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:42:38.286599    7236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:42:38.289667    7236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:42:38.293665    7236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:42:38.296676    7236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:42:38.299998    7236 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:42:38.300083    7236 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:42:38.300131    7236 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:42:38.303640    7236 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:42:38.310570    7236 start.go:297] selected driver: qemu2
	I1216 12:42:38.310577    7236 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:42:38.310583    7236 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:42:38.313068    7236 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:42:38.316646    7236 out.go:177] * Automatically selected the socket_vmnet network
	E1216 12:42:38.319637    7236 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1216 12:42:38.319650    7236 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:42:38.319663    7236 cni.go:84] Creating CNI manager for "bridge"
	I1216 12:42:38.319672    7236 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:42:38.319706    7236 start.go:340] cluster config:
	{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:enable-default-cni-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:42:38.324268    7236 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:42:38.332585    7236 out.go:177] * Starting "enable-default-cni-838000" primary control-plane node in "enable-default-cni-838000" cluster
	I1216 12:42:38.336584    7236 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:42:38.336599    7236 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:42:38.336607    7236 cache.go:56] Caching tarball of preloaded images
	I1216 12:42:38.336672    7236 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:42:38.336677    7236 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:42:38.336738    7236 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/enable-default-cni-838000/config.json ...
	I1216 12:42:38.336748    7236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/enable-default-cni-838000/config.json: {Name:mka5549ff5c9df16253f4636b3bb54a0ec6d138d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:42:38.337069    7236 start.go:360] acquireMachinesLock for enable-default-cni-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:38.337113    7236 start.go:364] duration metric: took 37.583µs to acquireMachinesLock for "enable-default-cni-838000"
	I1216 12:42:38.337123    7236 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:enable-default-cni-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:38.337164    7236 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:38.345627    7236 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:38.360172    7236 start.go:159] libmachine.API.Create for "enable-default-cni-838000" (driver="qemu2")
	I1216 12:42:38.360203    7236 client.go:168] LocalClient.Create starting
	I1216 12:42:38.360267    7236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:38.360305    7236 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:38.360315    7236 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:38.360355    7236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:38.360390    7236 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:38.360397    7236 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:38.360881    7236 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:38.532241    7236 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:38.580422    7236 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:38.580428    7236 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:38.580663    7236 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I1216 12:42:38.590572    7236 main.go:141] libmachine: STDOUT: 
	I1216 12:42:38.590596    7236 main.go:141] libmachine: STDERR: 
	I1216 12:42:38.590658    7236 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2 +20000M
	I1216 12:42:38.599247    7236 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:38.599262    7236 main.go:141] libmachine: STDERR: 
	I1216 12:42:38.599279    7236 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I1216 12:42:38.599288    7236 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:38.599297    7236 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:38.599327    7236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:0b:ab:79:7b:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I1216 12:42:38.601134    7236 main.go:141] libmachine: STDOUT: 
	I1216 12:42:38.601147    7236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:38.601165    7236 client.go:171] duration metric: took 240.95525ms to LocalClient.Create
	I1216 12:42:40.603391    7236 start.go:128] duration metric: took 2.266205292s to createHost
	I1216 12:42:40.603455    7236 start.go:83] releasing machines lock for "enable-default-cni-838000", held for 2.26633425s
	W1216 12:42:40.603506    7236 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:40.614331    7236 out.go:177] * Deleting "enable-default-cni-838000" in qemu2 ...
	W1216 12:42:40.653723    7236 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:40.653903    7236 start.go:729] Will try again in 5 seconds ...
	I1216 12:42:45.656143    7236 start.go:360] acquireMachinesLock for enable-default-cni-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:45.656775    7236 start.go:364] duration metric: took 543.958µs to acquireMachinesLock for "enable-default-cni-838000"
	I1216 12:42:45.656918    7236 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:enable-default-cni-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:45.657238    7236 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:45.664986    7236 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:45.713420    7236 start.go:159] libmachine.API.Create for "enable-default-cni-838000" (driver="qemu2")
	I1216 12:42:45.713489    7236 client.go:168] LocalClient.Create starting
	I1216 12:42:45.713643    7236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:45.713736    7236 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:45.713753    7236 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:45.713820    7236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:45.713880    7236 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:45.713897    7236 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:45.714511    7236 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:45.894945    7236 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:46.002147    7236 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:46.002154    7236 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:46.002388    7236 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I1216 12:42:46.012831    7236 main.go:141] libmachine: STDOUT: 
	I1216 12:42:46.012848    7236 main.go:141] libmachine: STDERR: 
	I1216 12:42:46.012909    7236 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2 +20000M
	I1216 12:42:46.021457    7236 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:46.021481    7236 main.go:141] libmachine: STDERR: 
	I1216 12:42:46.021502    7236 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I1216 12:42:46.021507    7236 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:46.021514    7236 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:46.021541    7236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4d:3e:47:09:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I1216 12:42:46.023435    7236 main.go:141] libmachine: STDOUT: 
	I1216 12:42:46.023452    7236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:46.023465    7236 client.go:171] duration metric: took 309.9695ms to LocalClient.Create
	I1216 12:42:48.025675    7236 start.go:128] duration metric: took 2.368394958s to createHost
	I1216 12:42:48.025746    7236 start.go:83] releasing machines lock for "enable-default-cni-838000", held for 2.368949125s
	W1216 12:42:48.026082    7236 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:48.038616    7236 out.go:201] 
	W1216 12:42:48.042753    7236 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:42:48.042780    7236 out.go:270] * 
	* 
	W1216 12:42:48.045542    7236 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:42:48.061597    7236 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.88s)
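
All of the network-plugin Start failures in this group are the same infrastructure fault rather than a CNI regression: the qemu2 driver launches the VM through socket_vmnet_client, and the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet. Note also start_flags.go:464 above: the deprecated --enable-default-cni flag is mapped to --cni=bridge, so this profile exercises the same bridge CNI path as bridge-838000. A minimal triage sketch for the affected agent follows; the binary and socket paths are taken from the log, while the restart line assumes a from-source socket_vmnet install under /opt/socket_vmnet and should be adapted to however the daemon is actually managed on this host:

    # Does the socket exist, and is the daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running"

    # Probe the unix socket directly (BSD nc on macOS supports -U);
    # "Connection refused" here reproduces the driver's error.
    nc -U /var/run/socket_vmnet < /dev/null

    # Hypothetical restart for a from-source install (root required):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &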

TestNetworkPlugins/group/flannel/Start (9.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.971376667s)

-- stdout --
	* [flannel-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-838000" primary control-plane node in "flannel-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:42:50.443113    7350 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:42:50.443276    7350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:50.443283    7350 out.go:358] Setting ErrFile to fd 2...
	I1216 12:42:50.443285    7350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:42:50.443434    7350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:42:50.444604    7350 out.go:352] Setting JSON to false
	I1216 12:42:50.462785    7350 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4341,"bootTime":1734377429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:42:50.462857    7350 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:42:50.468356    7350 out.go:177] * [flannel-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:42:50.476423    7350 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:42:50.476460    7350 notify.go:220] Checking for updates...
	I1216 12:42:50.483329    7350 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:42:50.486315    7350 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:42:50.489352    7350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:42:50.492333    7350 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:42:50.495350    7350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:42:50.498626    7350 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:42:50.498702    7350 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:42:50.498753    7350 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:42:50.503278    7350 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:42:50.510294    7350 start.go:297] selected driver: qemu2
	I1216 12:42:50.510302    7350 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:42:50.510311    7350 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:42:50.512833    7350 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:42:50.516317    7350 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:42:50.520477    7350 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:42:50.520500    7350 cni.go:84] Creating CNI manager for "flannel"
	I1216 12:42:50.520504    7350 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1216 12:42:50.520542    7350 start.go:340] cluster config:
	{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:42:50.524994    7350 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:42:50.533336    7350 out.go:177] * Starting "flannel-838000" primary control-plane node in "flannel-838000" cluster
	I1216 12:42:50.537320    7350 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:42:50.537333    7350 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:42:50.537343    7350 cache.go:56] Caching tarball of preloaded images
	I1216 12:42:50.537414    7350 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:42:50.537419    7350 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:42:50.537476    7350 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/flannel-838000/config.json ...
	I1216 12:42:50.537488    7350 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/flannel-838000/config.json: {Name:mkc741f01f17133bbfcc2a9c68a8a7977dd581f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:42:50.537960    7350 start.go:360] acquireMachinesLock for flannel-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:50.538007    7350 start.go:364] duration metric: took 41.25µs to acquireMachinesLock for "flannel-838000"
	I1216 12:42:50.538020    7350 start.go:93] Provisioning new machine with config: &{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:50.538045    7350 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:50.542380    7350 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:50.559151    7350 start.go:159] libmachine.API.Create for "flannel-838000" (driver="qemu2")
	I1216 12:42:50.559193    7350 client.go:168] LocalClient.Create starting
	I1216 12:42:50.559268    7350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:50.559307    7350 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:50.559320    7350 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:50.559364    7350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:50.559393    7350 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:50.559402    7350 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:50.559822    7350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:50.730598    7350 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:50.922315    7350 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:50.922326    7350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:50.922590    7350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2
	I1216 12:42:50.933043    7350 main.go:141] libmachine: STDOUT: 
	I1216 12:42:50.933066    7350 main.go:141] libmachine: STDERR: 
	I1216 12:42:50.933132    7350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2 +20000M
	I1216 12:42:50.941755    7350 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:50.941771    7350 main.go:141] libmachine: STDERR: 
	I1216 12:42:50.941796    7350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2
	I1216 12:42:50.941800    7350 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:50.941813    7350 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:50.941866    7350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:bf:ce:32:9c:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2
	I1216 12:42:50.943706    7350 main.go:141] libmachine: STDOUT: 
	I1216 12:42:50.943720    7350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:50.943741    7350 client.go:171] duration metric: took 384.541917ms to LocalClient.Create
	I1216 12:42:52.945944    7350 start.go:128] duration metric: took 2.407871333s to createHost
	I1216 12:42:52.946023    7350 start.go:83] releasing machines lock for "flannel-838000", held for 2.4080055s
	W1216 12:42:52.946097    7350 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:52.963647    7350 out.go:177] * Deleting "flannel-838000" in qemu2 ...
	W1216 12:42:52.989950    7350 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:42:52.989981    7350 start.go:729] Will try again in 5 seconds ...
	I1216 12:42:57.992195    7350 start.go:360] acquireMachinesLock for flannel-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:42:57.992700    7350 start.go:364] duration metric: took 438µs to acquireMachinesLock for "flannel-838000"
	I1216 12:42:57.992792    7350 start.go:93] Provisioning new machine with config: &{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:42:57.992993    7350 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:42:58.004438    7350 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:42:58.050726    7350 start.go:159] libmachine.API.Create for "flannel-838000" (driver="qemu2")
	I1216 12:42:58.050780    7350 client.go:168] LocalClient.Create starting
	I1216 12:42:58.050966    7350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:42:58.051059    7350 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:58.051076    7350 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:58.051151    7350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:42:58.051211    7350 main.go:141] libmachine: Decoding PEM data...
	I1216 12:42:58.051231    7350 main.go:141] libmachine: Parsing certificate...
	I1216 12:42:58.052181    7350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:42:58.232592    7350 main.go:141] libmachine: Creating SSH key...
	I1216 12:42:58.313117    7350 main.go:141] libmachine: Creating Disk image...
	I1216 12:42:58.313129    7350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:42:58.313395    7350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2
	I1216 12:42:58.323747    7350 main.go:141] libmachine: STDOUT: 
	I1216 12:42:58.323791    7350 main.go:141] libmachine: STDERR: 
	I1216 12:42:58.323852    7350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2 +20000M
	I1216 12:42:58.332884    7350 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:42:58.332916    7350 main.go:141] libmachine: STDERR: 
	I1216 12:42:58.332934    7350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2
	I1216 12:42:58.332941    7350 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:42:58.332947    7350 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:42:58.332982    7350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:3a:8e:a3:08:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/flannel-838000/disk.qcow2
	I1216 12:42:58.334908    7350 main.go:141] libmachine: STDOUT: 
	I1216 12:42:58.334927    7350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:42:58.334941    7350 client.go:171] duration metric: took 284.154041ms to LocalClient.Create
	I1216 12:43:00.337149    7350 start.go:128] duration metric: took 2.34411775s to createHost
	I1216 12:43:00.337246    7350 start.go:83] releasing machines lock for "flannel-838000", held for 2.344528875s
	W1216 12:43:00.337650    7350 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:00.347295    7350 out.go:201] 
	W1216 12:43:00.355395    7350 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:00.355442    7350 out.go:270] * 
	* 
	W1216 12:43:00.358079    7350 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:00.367261    7350 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.97s)
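
The flannel run narrows the failure further: cni.go:84 creates the Flannel CNI manager without error, and the run still dies at VM launch inside the socket_vmnet_client wrapper, so no CNI code is ever exercised. The wrapper can be probed on its own to separate a daemon outage from a qemu problem; the sketch below is an illustrative check that substitutes a trivial command for qemu-system-aarch64, assuming the client dials the socket before exec'ing its argument:

    # Expected on this agent, matching the log:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If this probe succeeds while minikube start still fails, the fault is in the qemu invocation; here it is refused immediately, so qemu is never started and the roughly ten-second failures in this group are all downstream of the same daemon outage.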

TestNetworkPlugins/group/bridge/Start (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.954829791s)

-- stdout --
	* [bridge-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-838000" primary control-plane node in "bridge-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:43:02.978460    7468 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:02.978620    7468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:02.978623    7468 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:02.978626    7468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:02.978759    7468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:02.979971    7468 out.go:352] Setting JSON to false
	I1216 12:43:02.998103    7468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4353,"bootTime":1734377429,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:02.998170    7468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:03.003906    7468 out.go:177] * [bridge-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:03.012798    7468 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:03.012834    7468 notify.go:220] Checking for updates...
	I1216 12:43:03.021746    7468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:03.024802    7468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:03.027691    7468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:03.030746    7468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:03.033820    7468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:03.037152    7468 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:03.037225    7468 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:43:03.037273    7468 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:03.041754    7468 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:43:03.047748    7468 start.go:297] selected driver: qemu2
	I1216 12:43:03.047758    7468 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:43:03.047766    7468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:03.050331    7468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:43:03.053797    7468 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:43:03.056909    7468 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:03.056930    7468 cni.go:84] Creating CNI manager for "bridge"
	I1216 12:43:03.056933    7468 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:43:03.056968    7468 start.go:340] cluster config:
	{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:03.061739    7468 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:03.069721    7468 out.go:177] * Starting "bridge-838000" primary control-plane node in "bridge-838000" cluster
	I1216 12:43:03.073786    7468 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:43:03.073804    7468 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:43:03.073817    7468 cache.go:56] Caching tarball of preloaded images
	I1216 12:43:03.073912    7468 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:43:03.073918    7468 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:43:03.073998    7468 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/bridge-838000/config.json ...
	I1216 12:43:03.074010    7468 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/bridge-838000/config.json: {Name:mk8ea71adfa64f3316204789b49c5e4c79b0ff04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:43:03.074491    7468 start.go:360] acquireMachinesLock for bridge-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:03.074542    7468 start.go:364] duration metric: took 45.291µs to acquireMachinesLock for "bridge-838000"
	I1216 12:43:03.074556    7468 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:03.074586    7468 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:03.078812    7468 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:43:03.096165    7468 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I1216 12:43:03.096194    7468 client.go:168] LocalClient.Create starting
	I1216 12:43:03.096267    7468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:03.096305    7468 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:03.096317    7468 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:03.096352    7468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:03.096383    7468 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:03.096391    7468 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:03.096771    7468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:03.266227    7468 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:03.498200    7468 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:03.498215    7468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:03.498490    7468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2
	I1216 12:43:03.509137    7468 main.go:141] libmachine: STDOUT: 
	I1216 12:43:03.509159    7468 main.go:141] libmachine: STDERR: 
	I1216 12:43:03.509226    7468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I1216 12:43:03.518160    7468 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:03.518177    7468 main.go:141] libmachine: STDERR: 
	I1216 12:43:03.518198    7468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2
	I1216 12:43:03.518206    7468 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:03.518219    7468 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:03.518250    7468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fc:0b:52:30:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2
	I1216 12:43:03.520112    7468 main.go:141] libmachine: STDOUT: 
	I1216 12:43:03.520125    7468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:03.520145    7468 client.go:171] duration metric: took 423.94575ms to LocalClient.Create
	I1216 12:43:05.522318    7468 start.go:128] duration metric: took 2.447707959s to createHost
	I1216 12:43:05.522420    7468 start.go:83] releasing machines lock for "bridge-838000", held for 2.44786975s
	W1216 12:43:05.522499    7468 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:05.533351    7468 out.go:177] * Deleting "bridge-838000" in qemu2 ...
	W1216 12:43:05.565840    7468 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:05.565865    7468 start.go:729] Will try again in 5 seconds ...
	I1216 12:43:10.568018    7468 start.go:360] acquireMachinesLock for bridge-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:10.568580    7468 start.go:364] duration metric: took 473.75µs to acquireMachinesLock for "bridge-838000"
	I1216 12:43:10.568640    7468 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:bridge-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:10.568910    7468 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:10.580580    7468 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:43:10.616167    7468 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I1216 12:43:10.616221    7468 client.go:168] LocalClient.Create starting
	I1216 12:43:10.616344    7468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:10.616407    7468 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:10.616422    7468 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:10.616475    7468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:10.616522    7468 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:10.616538    7468 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:10.617346    7468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:10.791223    7468 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:10.826828    7468 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:10.826833    7468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:10.827054    7468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2
	I1216 12:43:10.837557    7468 main.go:141] libmachine: STDOUT: 
	I1216 12:43:10.837581    7468 main.go:141] libmachine: STDERR: 
	I1216 12:43:10.837670    7468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I1216 12:43:10.846619    7468 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:10.846643    7468 main.go:141] libmachine: STDERR: 
	I1216 12:43:10.846656    7468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2
	I1216 12:43:10.846661    7468 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:10.846670    7468 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:10.846699    7468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:86:8e:60:56:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/bridge-838000/disk.qcow2
	I1216 12:43:10.848599    7468 main.go:141] libmachine: STDOUT: 
	I1216 12:43:10.848612    7468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:10.848625    7468 client.go:171] duration metric: took 232.398208ms to LocalClient.Create
	I1216 12:43:12.850727    7468 start.go:128] duration metric: took 2.281772708s to createHost
	I1216 12:43:12.850769    7468 start.go:83] releasing machines lock for "bridge-838000", held for 2.28217175s
	W1216 12:43:12.850916    7468 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:12.870188    7468 out.go:201] 
	W1216 12:43:12.874270    7468 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:12.874278    7468 out.go:270] * 
	* 
	W1216 12:43:12.875102    7468 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:12.889186    7468 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)
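
Every failure in this group reduces to the same root cause, visible in the stderr above: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 ever boots. As a diagnostic aid (this probe is not part of minikube; the socket path is taken from the SocketVMnetPath field in the config dump above), a minimal Go sketch that reproduces the reachability check:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config; adjust if the daemon uses another path.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition every failing test hits: no socket_vmnet daemon behind the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the CI host, restarting the socket_vmnet daemon (which must run as root on macOS) should clear this whole group of failures.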

TestNetworkPlugins/group/kubenet/Start (9.99s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.986677625s)

-- stdout --
	* [kubenet-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-838000" primary control-plane node in "kubenet-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:43:15.310443    7579 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:15.310619    7579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:15.310622    7579 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:15.310625    7579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:15.310745    7579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:15.311803    7579 out.go:352] Setting JSON to false
	I1216 12:43:15.330758    7579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4366,"bootTime":1734377429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:15.330838    7579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:15.342693    7579 out.go:177] * [kubenet-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:15.348810    7579 notify.go:220] Checking for updates...
	I1216 12:43:15.351654    7579 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:15.358753    7579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:15.366684    7579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:15.373730    7579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:15.376707    7579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:15.380547    7579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:15.385027    7579 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:15.385083    7579 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:43:15.385129    7579 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:15.388711    7579 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:43:15.395709    7579 start.go:297] selected driver: qemu2
	I1216 12:43:15.395716    7579 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:43:15.395725    7579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:15.398378    7579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:43:15.401732    7579 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:43:15.403308    7579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:15.403326    7579 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 12:43:15.403367    7579 start.go:340] cluster config:
	{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubenet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:15.407767    7579 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:15.415716    7579 out.go:177] * Starting "kubenet-838000" primary control-plane node in "kubenet-838000" cluster
	I1216 12:43:15.419666    7579 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:43:15.419691    7579 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:43:15.419706    7579 cache.go:56] Caching tarball of preloaded images
	I1216 12:43:15.419811    7579 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:43:15.419818    7579 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:43:15.419876    7579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kubenet-838000/config.json ...
	I1216 12:43:15.419887    7579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/kubenet-838000/config.json: {Name:mk5b78a9d67f6fda4604013052ddc5d2fc5af460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:43:15.420258    7579 start.go:360] acquireMachinesLock for kubenet-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:15.420308    7579 start.go:364] duration metric: took 43.584µs to acquireMachinesLock for "kubenet-838000"
	I1216 12:43:15.420320    7579 start.go:93] Provisioning new machine with config: &{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubenet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:15.420351    7579 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:15.424537    7579 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:43:15.440543    7579 start.go:159] libmachine.API.Create for "kubenet-838000" (driver="qemu2")
	I1216 12:43:15.440572    7579 client.go:168] LocalClient.Create starting
	I1216 12:43:15.440648    7579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:15.440688    7579 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:15.440702    7579 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:15.440738    7579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:15.440771    7579 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:15.440782    7579 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:15.441275    7579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:15.759757    7579 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:15.829206    7579 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:15.829214    7579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:15.829429    7579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2
	I1216 12:43:15.840160    7579 main.go:141] libmachine: STDOUT: 
	I1216 12:43:15.840184    7579 main.go:141] libmachine: STDERR: 
	I1216 12:43:15.840266    7579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2 +20000M
	I1216 12:43:15.853492    7579 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:15.853514    7579 main.go:141] libmachine: STDERR: 
	I1216 12:43:15.853532    7579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2
	I1216 12:43:15.853537    7579 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:15.853548    7579 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:15.853586    7579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ed:d1:b5:1f:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2
	I1216 12:43:15.855763    7579 main.go:141] libmachine: STDOUT: 
	I1216 12:43:15.855778    7579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:15.855798    7579 client.go:171] duration metric: took 415.219125ms to LocalClient.Create
	I1216 12:43:17.857911    7579 start.go:128] duration metric: took 2.437522792s to createHost
	I1216 12:43:17.857949    7579 start.go:83] releasing machines lock for "kubenet-838000", held for 2.4376345s
	W1216 12:43:17.858003    7579 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:17.868190    7579 out.go:177] * Deleting "kubenet-838000" in qemu2 ...
	W1216 12:43:17.893572    7579 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:17.893584    7579 start.go:729] Will try again in 5 seconds ...
	I1216 12:43:22.895722    7579 start.go:360] acquireMachinesLock for kubenet-838000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:22.895923    7579 start.go:364] duration metric: took 167.167µs to acquireMachinesLock for "kubenet-838000"
	I1216 12:43:22.895984    7579 start.go:93] Provisioning new machine with config: &{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubenet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:22.896051    7579 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:22.908933    7579 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 12:43:22.932211    7579 start.go:159] libmachine.API.Create for "kubenet-838000" (driver="qemu2")
	I1216 12:43:22.932243    7579 client.go:168] LocalClient.Create starting
	I1216 12:43:22.932353    7579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:22.932406    7579 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:22.932419    7579 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:22.932467    7579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:22.932505    7579 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:22.932520    7579 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:22.933106    7579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:23.103105    7579 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:23.187433    7579 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:23.187442    7579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:23.187681    7579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2
	I1216 12:43:23.197676    7579 main.go:141] libmachine: STDOUT: 
	I1216 12:43:23.197707    7579 main.go:141] libmachine: STDERR: 
	I1216 12:43:23.197767    7579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2 +20000M
	I1216 12:43:23.206330    7579 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:23.206345    7579 main.go:141] libmachine: STDERR: 
	I1216 12:43:23.206357    7579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2
	I1216 12:43:23.206363    7579 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:23.206373    7579 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:23.206408    7579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6c:0e:4d:8f:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/kubenet-838000/disk.qcow2
	I1216 12:43:23.208236    7579 main.go:141] libmachine: STDOUT: 
	I1216 12:43:23.208250    7579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:23.208263    7579 client.go:171] duration metric: took 276.008833ms to LocalClient.Create
	I1216 12:43:25.210459    7579 start.go:128] duration metric: took 2.314378417s to createHost
	I1216 12:43:25.210532    7579 start.go:83] releasing machines lock for "kubenet-838000", held for 2.314596875s
	W1216 12:43:25.210984    7579 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:25.224699    7579 out.go:201] 
	W1216 12:43:25.227836    7579 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:25.227864    7579 out.go:270] * 
	* 
	W1216 12:43:25.230439    7579 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:25.246610    7579 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.99s)
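
Note that everything before the network step succeeds on each attempt: libmachine downloads the ISO, creates the SSH key, converts the raw boot image to qcow2, and grows it by the requested 20000 MB; only the socket_vmnet connection fails. A standalone Go sketch of the two qemu-img invocations shown in the logs (the file names here are placeholders, not the CI paths):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	// run executes a command and mirrors its output, like the "executing:" lines above.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v failed: %v", name, args, err)
		}
	}

	func main() {
		raw, img := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		// "Creating 20000 MB hard disk image...": convert the raw seed image to qcow2.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
		// Then grow the image, matching "qemu-img resize <img> +20000M" in the log.
		run("qemu-img", "resize", img, "+20000M")
	}

The sketch uses only the standard library; it illustrates the disk-build steps that pass, which localizes the failure to the socket connection itself.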

TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E1216 12:43:28.331453    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.964035792s)

-- stdout --
	* [old-k8s-version-221000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-221000" primary control-plane node in "old-k8s-version-221000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-221000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:43:27.639710    7696 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:27.639860    7696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:27.639864    7696 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:27.639866    7696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:27.639987    7696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:27.641204    7696 out.go:352] Setting JSON to false
	I1216 12:43:27.659576    7696 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4378,"bootTime":1734377429,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:27.659661    7696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:27.665375    7696 out.go:177] * [old-k8s-version-221000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:27.674253    7696 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:27.674306    7696 notify.go:220] Checking for updates...
	I1216 12:43:27.682184    7696 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:27.685167    7696 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:27.688177    7696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:27.691152    7696 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:27.694219    7696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:27.697561    7696 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:27.697643    7696 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:43:27.697690    7696 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:27.701159    7696 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:43:27.707187    7696 start.go:297] selected driver: qemu2
	I1216 12:43:27.707193    7696 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:43:27.707207    7696 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:27.709675    7696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:43:27.714212    7696 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:43:27.717253    7696 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:27.717267    7696 cni.go:84] Creating CNI manager for ""
	I1216 12:43:27.717297    7696 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 12:43:27.717320    7696 start.go:340] cluster config:
	{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:27.722039    7696 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:27.731306    7696 out.go:177] * Starting "old-k8s-version-221000" primary control-plane node in "old-k8s-version-221000" cluster
	I1216 12:43:27.735067    7696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 12:43:27.735082    7696 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 12:43:27.735092    7696 cache.go:56] Caching tarball of preloaded images
	I1216 12:43:27.735176    7696 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:43:27.735182    7696 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 12:43:27.735238    7696 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/old-k8s-version-221000/config.json ...
	I1216 12:43:27.735255    7696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/old-k8s-version-221000/config.json: {Name:mke407e8895b5fc1e30ef3442fa1df9c3173faf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:43:27.735730    7696 start.go:360] acquireMachinesLock for old-k8s-version-221000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:27.735779    7696 start.go:364] duration metric: took 41.875µs to acquireMachinesLock for "old-k8s-version-221000"
	I1216 12:43:27.735793    7696 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:27.735817    7696 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:27.743067    7696 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:43:27.759573    7696 start.go:159] libmachine.API.Create for "old-k8s-version-221000" (driver="qemu2")
	I1216 12:43:27.759602    7696 client.go:168] LocalClient.Create starting
	I1216 12:43:27.759693    7696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:27.759730    7696 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:27.759742    7696 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:27.759783    7696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:27.759812    7696 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:27.759818    7696 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:27.760297    7696 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:27.926387    7696 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:28.022884    7696 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:28.022889    7696 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:28.023141    7696 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:28.033199    7696 main.go:141] libmachine: STDOUT: 
	I1216 12:43:28.033228    7696 main.go:141] libmachine: STDERR: 
	I1216 12:43:28.033285    7696 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2 +20000M
	I1216 12:43:28.042996    7696 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:28.043019    7696 main.go:141] libmachine: STDERR: 
	I1216 12:43:28.043051    7696 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:28.043055    7696 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:28.043065    7696 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:28.043099    7696 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:25:c1:09:e3:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:28.045402    7696 main.go:141] libmachine: STDOUT: 
	I1216 12:43:28.045478    7696 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:28.045499    7696 client.go:171] duration metric: took 285.890166ms to LocalClient.Create
	I1216 12:43:30.047728    7696 start.go:128] duration metric: took 2.311875208s to createHost
	I1216 12:43:30.047826    7696 start.go:83] releasing machines lock for "old-k8s-version-221000", held for 2.312035875s
	W1216 12:43:30.047912    7696 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:30.066340    7696 out.go:177] * Deleting "old-k8s-version-221000" in qemu2 ...
	W1216 12:43:30.094925    7696 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:30.094964    7696 start.go:729] Will try again in 5 seconds ...
	I1216 12:43:35.097173    7696 start.go:360] acquireMachinesLock for old-k8s-version-221000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:35.097784    7696 start.go:364] duration metric: took 514.958µs to acquireMachinesLock for "old-k8s-version-221000"
	I1216 12:43:35.097931    7696 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:35.098243    7696 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:35.106850    7696 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:43:35.154332    7696 start.go:159] libmachine.API.Create for "old-k8s-version-221000" (driver="qemu2")
	I1216 12:43:35.154391    7696 client.go:168] LocalClient.Create starting
	I1216 12:43:35.154582    7696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:35.154682    7696 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:35.154697    7696 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:35.154770    7696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:35.154828    7696 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:35.154839    7696 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:35.155717    7696 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:35.331971    7696 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:35.499016    7696 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:35.499027    7696 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:35.499288    7696 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:35.509740    7696 main.go:141] libmachine: STDOUT: 
	I1216 12:43:35.509757    7696 main.go:141] libmachine: STDERR: 
	I1216 12:43:35.509818    7696 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2 +20000M
	I1216 12:43:35.518515    7696 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:35.518534    7696 main.go:141] libmachine: STDERR: 
	I1216 12:43:35.518551    7696 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:35.518556    7696 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:35.518564    7696 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:35.518603    7696 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:73:38:0c:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:35.520465    7696 main.go:141] libmachine: STDOUT: 
	I1216 12:43:35.520477    7696 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:35.520489    7696 client.go:171] duration metric: took 366.093292ms to LocalClient.Create
	I1216 12:43:37.522655    7696 start.go:128] duration metric: took 2.424378167s to createHost
	I1216 12:43:37.522719    7696 start.go:83] releasing machines lock for "old-k8s-version-221000", held for 2.4249125s
	W1216 12:43:37.523108    7696 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-221000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-221000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:37.531957    7696 out.go:201] 
	W1216 12:43:37.543087    7696 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:37.543117    7696 out.go:270] * 
	* 
	W1216 12:43:37.545914    7696 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:37.557975    7696 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (69.00925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)
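Every start attempt in this group fails at the same step: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket, so no VM is ever booted. A first triage pass on the build host could look like this (a sketch, assuming socket_vmnet was installed via Homebrew and uses the default socket path shown in the log):

	# does the daemon's unix socket exist, and who owns it?
	ls -l /var/run/socket_vmnet
	# restart the daemon if the Homebrew service is installed (assumption)
	sudo brew services restart socket_vmnet

Once the daemon is listening again, the same start command should get past "Creating qemu2 VM".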

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-221000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-221000 create -f testdata/busybox.yaml: exit status 1 (29.482042ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-221000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-221000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (33.609792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (33.199542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
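This failure is purely downstream of FirstStart: minikube only writes the old-k8s-version-221000 context into the kubeconfig after a successful start, so every kubectl --context invocation in the rest of the group fails with "context ... does not exist" before any manifest is applied. A context check separates this cascade from a genuine apply error (plain kubectl, nothing assumed beyond the profile name):

	# confirm whether the profile's context was ever created
	kubectl config get-contexts old-k8s-version-221000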

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-221000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-221000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-221000 describe deploy/metrics-server -n kube-system: exit status 1 (27.815166ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-221000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-221000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (34.212416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
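Note that the addons enable step itself exited 0: the image and registry overrides are persisted into the profile config even with no apiserver running, and only the follow-up describe fails on the missing context. On a live cluster the override could be verified directly (a sketch; the jsonpath assumes the deployment's first container carries the overridden image):

	# print the image the metrics-server deployment actually references
	kubectl --context old-k8s-version-221000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'

which the test expects to contain fake.domain/registry.k8s.io/echoserver:1.4.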

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.194947958s)

                                                
                                                
-- stdout --
	* [old-k8s-version-221000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-221000" primary control-plane node in "old-k8s-version-221000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-221000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-221000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:43:41.505206    7752 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:41.505395    7752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:41.505398    7752 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:41.505401    7752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:41.505527    7752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:41.506626    7752 out.go:352] Setting JSON to false
	I1216 12:43:41.524771    7752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4392,"bootTime":1734377429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:41.524844    7752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:41.529515    7752 out.go:177] * [old-k8s-version-221000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:41.536490    7752 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:41.536573    7752 notify.go:220] Checking for updates...
	I1216 12:43:41.544387    7752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:41.547443    7752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:41.550497    7752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:41.553482    7752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:41.556449    7752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:41.559731    7752 config.go:182] Loaded profile config "old-k8s-version-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1216 12:43:41.562366    7752 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 12:43:41.565441    7752 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:41.569473    7752 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:43:41.576492    7752 start.go:297] selected driver: qemu2
	I1216 12:43:41.576500    7752 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:41.576561    7752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:41.578941    7752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:41.578967    7752 cni.go:84] Creating CNI manager for ""
	I1216 12:43:41.578985    7752 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 12:43:41.579026    7752 start.go:340] cluster config:
	{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:41.583176    7752 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:41.591400    7752 out.go:177] * Starting "old-k8s-version-221000" primary control-plane node in "old-k8s-version-221000" cluster
	I1216 12:43:41.594453    7752 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 12:43:41.594467    7752 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 12:43:41.594478    7752 cache.go:56] Caching tarball of preloaded images
	I1216 12:43:41.594552    7752 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:43:41.594557    7752 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 12:43:41.594611    7752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/old-k8s-version-221000/config.json ...
	I1216 12:43:41.595131    7752 start.go:360] acquireMachinesLock for old-k8s-version-221000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:41.595158    7752 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "old-k8s-version-221000"
	I1216 12:43:41.595166    7752 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:43:41.595171    7752 fix.go:54] fixHost starting: 
	I1216 12:43:41.595273    7752 fix.go:112] recreateIfNeeded on old-k8s-version-221000: state=Stopped err=<nil>
	W1216 12:43:41.595279    7752 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:43:41.599453    7752 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-221000" ...
	I1216 12:43:41.606391    7752 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:41.606426    7752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:73:38:0c:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:41.608424    7752 main.go:141] libmachine: STDOUT: 
	I1216 12:43:41.608441    7752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:41.608469    7752 fix.go:56] duration metric: took 13.297084ms for fixHost
	I1216 12:43:41.608472    7752 start.go:83] releasing machines lock for "old-k8s-version-221000", held for 13.310875ms
	W1216 12:43:41.608476    7752 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:41.608510    7752 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:41.608513    7752 start.go:729] Will try again in 5 seconds ...
	I1216 12:43:46.610805    7752 start.go:360] acquireMachinesLock for old-k8s-version-221000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:46.611321    7752 start.go:364] duration metric: took 387.333µs to acquireMachinesLock for "old-k8s-version-221000"
	I1216 12:43:46.611395    7752 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:43:46.611415    7752 fix.go:54] fixHost starting: 
	I1216 12:43:46.612154    7752 fix.go:112] recreateIfNeeded on old-k8s-version-221000: state=Stopped err=<nil>
	W1216 12:43:46.612180    7752 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:43:46.619592    7752 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-221000" ...
	I1216 12:43:46.623583    7752 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:46.623961    7752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:73:38:0c:34:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/old-k8s-version-221000/disk.qcow2
	I1216 12:43:46.634610    7752 main.go:141] libmachine: STDOUT: 
	I1216 12:43:46.634666    7752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:46.634730    7752 fix.go:56] duration metric: took 23.317583ms for fixHost
	I1216 12:43:46.634746    7752 start.go:83] releasing machines lock for "old-k8s-version-221000", held for 23.402625ms
	W1216 12:43:46.634902    7752 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-221000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-221000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:46.642544    7752 out.go:201] 
	W1216 12:43:46.646663    7752 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:46.646686    7752 out.go:270] * 
	* 
	W1216 12:43:46.649322    7752 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:46.656545    7752 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (61.961334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
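Unlike FirstStart, this run takes the fixHost path: the machine directory from the earlier attempt still exists, so libmachine skips SSH-key and disk creation and goes straight to restarting the VM, which dies on the identical socket_vmnet connect. The failing handshake can be reproduced without booting anything, since socket_vmnet_client connects to the socket and then execs the given command with the connection on fd 3, exactly as in the qemu invocation above (a sketch using a no-op command):

	# exits with the same "Connection refused" while the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true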

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-221000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (35.560333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-221000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-221000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-221000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.483417ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-221000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-221000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (34.188416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-221000 image list --format=json
start_stop_delete_test.go:302: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (33.623625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
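The -want +got diff lists the complete image set expected for v1.20.0 (still hosted under k8s.gcr.io for that release) against an empty got: with no VM and no loaded images, image list has nothing to report for this profile. On a healthy profile the same data is easier to eyeball in tabular form (a sketch; assumes the table output format supported by this minikube version):

	# human-readable variant of the command the test runs
	out/minikube-darwin-arm64 -p old-k8s-version-221000 image list --format=table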

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-221000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-221000 --alsologtostderr -v=1: exit status 83 (46.8325ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-221000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-221000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:43:46.938387    7771 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:46.939380    7771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:46.939387    7771 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:46.939390    7771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:46.939514    7771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:46.939733    7771 out.go:352] Setting JSON to false
	I1216 12:43:46.939740    7771 mustload.go:65] Loading cluster: old-k8s-version-221000
	I1216 12:43:46.939943    7771 config.go:182] Loaded profile config "old-k8s-version-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1216 12:43:46.944726    7771 out.go:177] * The control-plane node old-k8s-version-221000 host is not running: state=Stopped
	I1216 12:43:46.947674    7771 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-221000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p old-k8s-version-221000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (34.520416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (33.9955ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
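pause exits 83 rather than 80 here: an existing profile whose host is Stopped is reported as advice (the "To start a cluster" hint) instead of being treated as a provisioning failure, so this state gets its own exit code. Scripts driving these profiles can gate on host state first (a sketch reusing the status invocation the post-mortem already runs):

	# only attempt the pause when the host reports Running
	if [ "$(out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000)" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p old-k8s-version-221000
	fi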

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-456000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-456000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (9.953858042s)

                                                
                                                
-- stdout --
	* [no-preload-456000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-456000" primary control-plane node in "no-preload-456000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-456000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:43:47.288304    7788 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:47.288468    7788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:47.288472    7788 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:47.288474    7788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:47.288597    7788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:47.289676    7788 out.go:352] Setting JSON to false
	I1216 12:43:47.307794    7788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4398,"bootTime":1734377429,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:47.307897    7788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:47.311573    7788 out.go:177] * [no-preload-456000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:47.317497    7788 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:47.317567    7788 notify.go:220] Checking for updates...
	I1216 12:43:47.325440    7788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:47.328479    7788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:47.331508    7788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:47.334404    7788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:47.337655    7788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:47.340814    7788 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:47.340876    7788 config.go:182] Loaded profile config "stopped-upgrade-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1216 12:43:47.340921    7788 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:47.345442    7788 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:43:47.352474    7788 start.go:297] selected driver: qemu2
	I1216 12:43:47.352479    7788 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:43:47.352486    7788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:47.354881    7788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:43:47.358465    7788 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:43:47.361495    7788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:47.361509    7788 cni.go:84] Creating CNI manager for ""
	I1216 12:43:47.361527    7788 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:43:47.361533    7788 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:43:47.361560    7788 start.go:340] cluster config:
	{Name:no-preload-456000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-456000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:47.366007    7788 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.370314    7788 out.go:177] * Starting "no-preload-456000" primary control-plane node in "no-preload-456000" cluster
	I1216 12:43:47.374449    7788 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:43:47.374516    7788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/no-preload-456000/config.json ...
	I1216 12:43:47.374534    7788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/no-preload-456000/config.json: {Name:mkf9c444ffbe3e05fede68751319e3391aaf5ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:43:47.374540    7788 cache.go:107] acquiring lock: {Name:mkde417adcf32f4dddf4d4cbb2289c4a3d9e49f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374570    7788 cache.go:107] acquiring lock: {Name:mk74ff6bf3ec5c9f09cf19d5873ecb014a6d41c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374611    7788 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 12:43:47.374612    7788 cache.go:107] acquiring lock: {Name:mk77fa19b6c4a788df74d1631f55e89f4550efc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374623    7788 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.25µs
	I1216 12:43:47.374629    7788 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 12:43:47.374635    7788 cache.go:107] acquiring lock: {Name:mk957fc336e0bc6d4f0dbfbde4e46e5b1bc50ef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374684    7788 cache.go:107] acquiring lock: {Name:mk0f357e6bac9583a51a891a72e3bc81894148d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374743    7788 cache.go:107] acquiring lock: {Name:mk8ab1f470305f1be3bdf05b2229d9cfc156004b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374749    7788 cache.go:107] acquiring lock: {Name:mk190b6b45142df2e1c20ce883715602a142245b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374799    7788 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 12:43:47.374829    7788 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 12:43:47.374849    7788 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 12:43:47.374867    7788 start.go:360] acquireMachinesLock for no-preload-456000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:47.374891    7788 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 12:43:47.374909    7788 cache.go:107] acquiring lock: {Name:mk6bed04b85fc3d16a0a2ac22f1d84039943f099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:47.374915    7788 start.go:364] duration metric: took 42.625µs to acquireMachinesLock for "no-preload-456000"
	I1216 12:43:47.374928    7788 start.go:93] Provisioning new machine with config: &{Name:no-preload-456000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-456000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:47.374975    7788 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:47.375046    7788 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 12:43:47.375066    7788 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 12:43:47.375170    7788 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 12:43:47.382476    7788 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:43:47.386979    7788 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 12:43:47.387061    7788 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 12:43:47.387118    7788 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 12:43:47.389445    7788 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 12:43:47.389480    7788 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 12:43:47.389495    7788 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 12:43:47.389505    7788 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 12:43:47.397564    7788 start.go:159] libmachine.API.Create for "no-preload-456000" (driver="qemu2")
	I1216 12:43:47.397583    7788 client.go:168] LocalClient.Create starting
	I1216 12:43:47.397659    7788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:47.397696    7788 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:47.397708    7788 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:47.397742    7788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:47.397771    7788 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:47.397778    7788 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:47.398133    7788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:47.576967    7788 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:47.742821    7788 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:47.742842    7788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:47.743079    7788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:47.752861    7788 main.go:141] libmachine: STDOUT: 
	I1216 12:43:47.752881    7788 main.go:141] libmachine: STDERR: 
	I1216 12:43:47.752932    7788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2 +20000M
	I1216 12:43:47.762619    7788 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:47.762640    7788 main.go:141] libmachine: STDERR: 
	I1216 12:43:47.762655    7788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:47.762662    7788 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:47.762678    7788 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:47.762710    7788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:75:bd:c4:3c:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:47.764772    7788 main.go:141] libmachine: STDOUT: 
	I1216 12:43:47.764786    7788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:47.764807    7788 client.go:171] duration metric: took 367.219917ms to LocalClient.Create
	I1216 12:43:47.860615    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1216 12:43:47.882044    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0
	I1216 12:43:47.884079    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 12:43:47.925705    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 12:43:47.985309    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 12:43:48.015082    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1216 12:43:48.015095    7788 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 640.460625ms
	I1216 12:43:48.015105    7788 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1216 12:43:48.060266    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 12:43:48.147861    7788 cache.go:162] opening:  /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 12:43:49.765068    7788 start.go:128] duration metric: took 2.390068042s to createHost
	I1216 12:43:49.765132    7788 start.go:83] releasing machines lock for "no-preload-456000", held for 2.390207083s
	W1216 12:43:49.765196    7788 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:49.784441    7788 out.go:177] * Deleting "no-preload-456000" in qemu2 ...
	W1216 12:43:49.811064    7788 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:49.811086    7788 start.go:729] Will try again in 5 seconds ...
	I1216 12:43:52.015357    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 exists
	I1216 12:43:52.015413    7788 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0" took 4.640515375s
	I1216 12:43:52.015445    7788 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
	I1216 12:43:52.191387    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
	I1216 12:43:52.191432    7788 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0" took 4.816809375s
	I1216 12:43:52.191455    7788 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
	I1216 12:43:52.740364    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1216 12:43:52.740411    7788 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 5.365747875s
	I1216 12:43:52.740432    7788 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1216 12:43:52.912070    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 exists
	I1216 12:43:52.912121    7788 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0" took 5.537369875s
	I1216 12:43:52.912146    7788 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
	I1216 12:43:53.134714    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 exists
	I1216 12:43:53.134763    7788 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0" took 5.760192292s
	I1216 12:43:53.134788    7788 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
	I1216 12:43:54.812833    7788 start.go:360] acquireMachinesLock for no-preload-456000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:54.813280    7788 start.go:364] duration metric: took 366.958µs to acquireMachinesLock for "no-preload-456000"
	I1216 12:43:54.813393    7788 start.go:93] Provisioning new machine with config: &{Name:no-preload-456000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-456000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:54.813620    7788 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:54.825362    7788 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:43:54.873902    7788 start.go:159] libmachine.API.Create for "no-preload-456000" (driver="qemu2")
	I1216 12:43:54.873940    7788 client.go:168] LocalClient.Create starting
	I1216 12:43:54.874090    7788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:54.874184    7788 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:54.874206    7788 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:54.874280    7788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:54.874345    7788 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:54.874362    7788 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:54.875025    7788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:55.061822    7788 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:55.137679    7788 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:55.137685    7788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:55.137914    7788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:55.147962    7788 main.go:141] libmachine: STDOUT: 
	I1216 12:43:55.147983    7788 main.go:141] libmachine: STDERR: 
	I1216 12:43:55.148056    7788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2 +20000M
	I1216 12:43:55.157125    7788 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:55.157148    7788 main.go:141] libmachine: STDERR: 
	I1216 12:43:55.157163    7788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:55.157170    7788 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:55.157182    7788 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:55.157222    7788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4a:21:4d:c5:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:55.159261    7788 main.go:141] libmachine: STDOUT: 
	I1216 12:43:55.159331    7788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:55.159346    7788 client.go:171] duration metric: took 285.400958ms to LocalClient.Create
	I1216 12:43:55.890058    7788 cache.go:157] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 exists
	I1216 12:43:55.890133    7788 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0" took 8.515538916s
	I1216 12:43:55.890211    7788 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I1216 12:43:55.890289    7788 cache.go:87] Successfully saved all images to host disk.
	I1216 12:43:57.161611    7788 start.go:128] duration metric: took 2.347960583s to createHost
	I1216 12:43:57.161687    7788 start.go:83] releasing machines lock for "no-preload-456000", held for 2.348380792s
	W1216 12:43:57.162148    7788 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-456000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-456000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:57.177402    7788 out.go:201] 
	W1216 12:43:57.181180    7788 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:57.181239    7788 out.go:270] * 
	* 
	W1216 12:43:57.183863    7788 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:57.196222    7788 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-456000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (74.739416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.03s)

TestStartStop/group/embed-certs/serial/FirstStart (11.45s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-355000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-355000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (11.410472041s)

-- stdout --
	* [embed-certs-355000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-355000" primary control-plane node in "embed-certs-355000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-355000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:43:48.379858    7829 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:48.380031    7829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:48.380034    7829 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:48.380036    7829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:48.380156    7829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:48.381254    7829 out.go:352] Setting JSON to false
	I1216 12:43:48.399048    7829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4399,"bootTime":1734377429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:48.399119    7829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:48.403344    7829 out.go:177] * [embed-certs-355000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:48.410322    7829 notify.go:220] Checking for updates...
	I1216 12:43:48.414086    7829 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:48.422267    7829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:48.425249    7829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:48.428303    7829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:48.431255    7829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:48.434232    7829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:48.438574    7829 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:48.438651    7829 config.go:182] Loaded profile config "no-preload-456000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:48.438697    7829 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:48.441308    7829 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:43:48.448257    7829 start.go:297] selected driver: qemu2
	I1216 12:43:48.448265    7829 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:43:48.448270    7829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:48.450773    7829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:43:48.455366    7829 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:43:48.458385    7829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:48.458407    7829 cni.go:84] Creating CNI manager for ""
	I1216 12:43:48.458435    7829 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:43:48.458450    7829 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:43:48.458488    7829 start.go:340] cluster config:
	{Name:embed-certs-355000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:48.463158    7829 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:48.470261    7829 out.go:177] * Starting "embed-certs-355000" primary control-plane node in "embed-certs-355000" cluster
	I1216 12:43:48.474273    7829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:43:48.474289    7829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:43:48.474300    7829 cache.go:56] Caching tarball of preloaded images
	I1216 12:43:48.474384    7829 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:43:48.474392    7829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:43:48.474448    7829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/embed-certs-355000/config.json ...
	I1216 12:43:48.474463    7829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/embed-certs-355000/config.json: {Name:mk4bafd4da4017636cfc85efa698dd41a83e3a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:43:48.474826    7829 start.go:360] acquireMachinesLock for embed-certs-355000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:49.765276    7829 start.go:364] duration metric: took 1.290426417s to acquireMachinesLock for "embed-certs-355000"
	I1216 12:43:49.765387    7829 start.go:93] Provisioning new machine with config: &{Name:embed-certs-355000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:49.765645    7829 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:49.775138    7829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:43:49.826026    7829 start.go:159] libmachine.API.Create for "embed-certs-355000" (driver="qemu2")
	I1216 12:43:49.826078    7829 client.go:168] LocalClient.Create starting
	I1216 12:43:49.826212    7829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:49.826293    7829 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:49.826315    7829 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:49.826378    7829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:49.826434    7829 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:49.826465    7829 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:49.827130    7829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:50.003997    7829 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:50.165614    7829 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:50.165621    7829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:50.165867    7829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:43:50.176177    7829 main.go:141] libmachine: STDOUT: 
	I1216 12:43:50.176206    7829 main.go:141] libmachine: STDERR: 
	I1216 12:43:50.176266    7829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2 +20000M
	I1216 12:43:50.184929    7829 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:50.184944    7829 main.go:141] libmachine: STDERR: 
	I1216 12:43:50.184958    7829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:43:50.184962    7829 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:50.184980    7829 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:50.185010    7829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:bf:2f:21:6f:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:43:50.186900    7829 main.go:141] libmachine: STDOUT: 
	I1216 12:43:50.186915    7829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:50.186935    7829 client.go:171] duration metric: took 360.848667ms to LocalClient.Create
	I1216 12:43:52.189174    7829 start.go:128] duration metric: took 2.423495042s to createHost
	I1216 12:43:52.189237    7829 start.go:83] releasing machines lock for "embed-certs-355000", held for 2.423921708s
	W1216 12:43:52.189283    7829 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:52.201688    7829 out.go:177] * Deleting "embed-certs-355000" in qemu2 ...
	W1216 12:43:52.245312    7829 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:52.245352    7829 start.go:729] Will try again in 5 seconds ...
	I1216 12:43:57.247496    7829 start.go:360] acquireMachinesLock for embed-certs-355000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:57.247679    7829 start.go:364] duration metric: took 147µs to acquireMachinesLock for "embed-certs-355000"
	I1216 12:43:57.247726    7829 start.go:93] Provisioning new machine with config: &{Name:embed-certs-355000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:43:57.247819    7829 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:43:57.254636    7829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:43:57.279255    7829 start.go:159] libmachine.API.Create for "embed-certs-355000" (driver="qemu2")
	I1216 12:43:57.279295    7829 client.go:168] LocalClient.Create starting
	I1216 12:43:57.279370    7829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:43:57.279408    7829 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:57.279425    7829 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:57.279468    7829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:43:57.279489    7829 main.go:141] libmachine: Decoding PEM data...
	I1216 12:43:57.279500    7829 main.go:141] libmachine: Parsing certificate...
	I1216 12:43:57.279916    7829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:43:57.587259    7829 main.go:141] libmachine: Creating SSH key...
	I1216 12:43:57.689069    7829 main.go:141] libmachine: Creating Disk image...
	I1216 12:43:57.689077    7829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:43:57.689276    7829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:43:57.699126    7829 main.go:141] libmachine: STDOUT: 
	I1216 12:43:57.699150    7829 main.go:141] libmachine: STDERR: 
	I1216 12:43:57.699209    7829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2 +20000M
	I1216 12:43:57.707872    7829 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:43:57.707889    7829 main.go:141] libmachine: STDERR: 
	I1216 12:43:57.707906    7829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:43:57.707910    7829 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:43:57.707919    7829 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:57.707955    7829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:34:cf:ab:8a:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:43:57.709792    7829 main.go:141] libmachine: STDOUT: 
	I1216 12:43:57.709807    7829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:57.709821    7829 client.go:171] duration metric: took 430.521875ms to LocalClient.Create
	I1216 12:43:59.711960    7829 start.go:128] duration metric: took 2.464132291s to createHost
	I1216 12:43:59.711979    7829 start.go:83] releasing machines lock for "embed-certs-355000", held for 2.464279583s
	W1216 12:43:59.712055    7829 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-355000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-355000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:59.725109    7829 out.go:201] 
	W1216 12:43:59.732166    7829 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:59.732174    7829 out.go:270] * 
	* 
	W1216 12:43:59.732820    7829 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:43:59.748182    7829 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-355000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (41.80875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.45s)

TestStartStop/group/no-preload/serial/DeployApp (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-456000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-456000 create -f testdata/busybox.yaml: exit status 1 (41.6945ms)

** stderr ** 
	error: context "no-preload-456000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-456000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (39.2705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (38.993ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.12s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-456000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-456000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-456000 describe deploy/metrics-server -n kube-system: exit status 1 (36.322333ms)

** stderr ** 
	error: context "no-preload-456000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-456000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (35.609375ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (5.31s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-456000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-456000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (5.231455375s)

-- stdout --
	* [no-preload-456000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-456000" primary control-plane node in "no-preload-456000" cluster
	* Restarting existing qemu2 VM for "no-preload-456000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-456000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:43:59.595608    7881 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:43:59.595796    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:59.595800    7881 out.go:358] Setting ErrFile to fd 2...
	I1216 12:43:59.595802    7881 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:43:59.595928    7881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:43:59.597112    7881 out.go:352] Setting JSON to false
	I1216 12:43:59.616616    7881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4410,"bootTime":1734377429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:43:59.616703    7881 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:43:59.621218    7881 out.go:177] * [no-preload-456000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:43:59.629149    7881 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:43:59.629186    7881 notify.go:220] Checking for updates...
	I1216 12:43:59.636156    7881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:43:59.639092    7881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:43:59.643105    7881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:43:59.646200    7881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:43:59.649092    7881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:43:59.652397    7881 config.go:182] Loaded profile config "no-preload-456000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:43:59.652706    7881 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:43:59.656188    7881 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:43:59.663156    7881 start.go:297] selected driver: qemu2
	I1216 12:43:59.663163    7881 start.go:901] validating driver "qemu2" against &{Name:no-preload-456000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-456000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:59.663216    7881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:43:59.666026    7881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:43:59.666050    7881 cni.go:84] Creating CNI manager for ""
	I1216 12:43:59.666073    7881 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:43:59.666093    7881 start.go:340] cluster config:
	{Name:no-preload-456000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-456000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:43:59.670909    7881 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.679135    7881 out.go:177] * Starting "no-preload-456000" primary control-plane node in "no-preload-456000" cluster
	I1216 12:43:59.683113    7881 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:43:59.683173    7881 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/no-preload-456000/config.json ...
	I1216 12:43:59.683197    7881 cache.go:107] acquiring lock: {Name:mkde417adcf32f4dddf4d4cbb2289c4a3d9e49f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683201    7881 cache.go:107] acquiring lock: {Name:mk0f357e6bac9583a51a891a72e3bc81894148d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683214    7881 cache.go:107] acquiring lock: {Name:mk8ab1f470305f1be3bdf05b2229d9cfc156004b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683273    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1216 12:43:59.683279    7881 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.625µs
	I1216 12:43:59.683289    7881 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1216 12:43:59.683200    7881 cache.go:107] acquiring lock: {Name:mk6bed04b85fc3d16a0a2ac22f1d84039943f099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683291    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
	I1216 12:43:59.683298    7881 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0" took 107.5µs
	I1216 12:43:59.683303    7881 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
	I1216 12:43:59.683309    7881 cache.go:107] acquiring lock: {Name:mk74ff6bf3ec5c9f09cf19d5873ecb014a6d41c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683316    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 exists
	I1216 12:43:59.683322    7881 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0" took 109.666µs
	I1216 12:43:59.683326    7881 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
	I1216 12:43:59.683336    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 exists
	I1216 12:43:59.683334    7881 cache.go:107] acquiring lock: {Name:mk957fc336e0bc6d4f0dbfbde4e46e5b1bc50ef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683339    7881 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0" took 147.584µs
	I1216 12:43:59.683343    7881 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
	I1216 12:43:59.683304    7881 cache.go:107] acquiring lock: {Name:mk77fa19b6c4a788df74d1631f55e89f4550efc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683359    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 exists
	I1216 12:43:59.683365    7881 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0" took 57.167µs
	I1216 12:43:59.683368    7881 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
	I1216 12:43:59.683378    7881 cache.go:107] acquiring lock: {Name:mk190b6b45142df2e1c20ce883715602a142245b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:43:59.683389    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1216 12:43:59.683395    7881 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 60.375µs
	I1216 12:43:59.683399    7881 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1216 12:43:59.683423    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 exists
	I1216 12:43:59.683428    7881 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0" took 147.167µs
	I1216 12:43:59.683431    7881 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I1216 12:43:59.683440    7881 cache.go:115] /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1216 12:43:59.683447    7881 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 114.209µs
	I1216 12:43:59.683450    7881 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1216 12:43:59.683455    7881 cache.go:87] Successfully saved all images to host disk.
	I1216 12:43:59.683648    7881 start.go:360] acquireMachinesLock for no-preload-456000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:43:59.712009    7881 start.go:364] duration metric: took 28.345375ms to acquireMachinesLock for "no-preload-456000"
	I1216 12:43:59.712020    7881 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:43:59.712026    7881 fix.go:54] fixHost starting: 
	I1216 12:43:59.712177    7881 fix.go:112] recreateIfNeeded on no-preload-456000: state=Stopped err=<nil>
	W1216 12:43:59.712188    7881 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:43:59.725101    7881 out.go:177] * Restarting existing qemu2 VM for "no-preload-456000" ...
	I1216 12:43:59.729146    7881 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:43:59.729189    7881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4a:21:4d:c5:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:43:59.731938    7881 main.go:141] libmachine: STDOUT: 
	I1216 12:43:59.731961    7881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:43:59.731993    7881 fix.go:56] duration metric: took 19.965125ms for fixHost
	I1216 12:43:59.731999    7881 start.go:83] releasing machines lock for "no-preload-456000", held for 19.984416ms
	W1216 12:43:59.732006    7881 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:43:59.732057    7881 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:43:59.732063    7881 start.go:729] Will try again in 5 seconds ...
	I1216 12:44:04.734305    7881 start.go:360] acquireMachinesLock for no-preload-456000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:04.734902    7881 start.go:364] duration metric: took 487.625µs to acquireMachinesLock for "no-preload-456000"
	I1216 12:44:04.735072    7881 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:04.735095    7881 fix.go:54] fixHost starting: 
	I1216 12:44:04.735928    7881 fix.go:112] recreateIfNeeded on no-preload-456000: state=Stopped err=<nil>
	W1216 12:44:04.735961    7881 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:04.740153    7881 out.go:177] * Restarting existing qemu2 VM for "no-preload-456000" ...
	I1216 12:44:04.747194    7881 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:04.747506    7881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4a:21:4d:c5:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/no-preload-456000/disk.qcow2
	I1216 12:44:04.758185    7881 main.go:141] libmachine: STDOUT: 
	I1216 12:44:04.758261    7881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:04.758347    7881 fix.go:56] duration metric: took 23.251959ms for fixHost
	I1216 12:44:04.758370    7881 start.go:83] releasing machines lock for "no-preload-456000", held for 23.401833ms
	W1216 12:44:04.758575    7881 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-456000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-456000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:04.767093    7881 out.go:201] 
	W1216 12:44:04.772279    7881 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:04.772304    7881 out.go:270] * 
	* 
	W1216 12:44:04.774897    7881 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:44:04.781178    7881 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-456000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (72.272541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.31s)
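The root cause for this group is visible in the qemu invocation above: the driver proxies every VM start through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon on the agent is refusing connections on /var/run/socket_vmnet, so the VM never boots and everything downstream (context, addons, images) fails as a consequence. Host-side checks along these lines would confirm it (a sketch, assuming the Homebrew-managed socket_vmnet install that the SocketVMnetClientPath/SocketVMnetPath config values point at):

	# Is anything serving the socket the driver dials?
	ls -l /var/run/socket_vmnet
	# Restart the daemon (assumption: installed and managed via Homebrew services).
	sudo brew services restart socket_vmnet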

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-355000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context embed-certs-355000 create -f testdata/busybox.yaml: exit status 1 (27.432208ms)

** stderr ** 
	error: context "embed-certs-355000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context embed-certs-355000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (34.234083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (33.702542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-355000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-355000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-355000 describe deploy/metrics-server -n kube-system: exit status 1 (27.6095ms)

** stderr ** 
	error: context "embed-certs-355000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-355000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (33.727333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-355000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-355000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (6.097657875s)

-- stdout --
	* [embed-certs-355000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-355000" primary control-plane node in "embed-certs-355000" cluster
	* Restarting existing qemu2 VM for "embed-certs-355000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-355000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:44:02.054875    7914 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:02.055052    7914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:02.055055    7914 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:02.055058    7914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:02.055177    7914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:02.056300    7914 out.go:352] Setting JSON to false
	I1216 12:44:02.075022    7914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4413,"bootTime":1734377429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:44:02.075103    7914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:44:02.080241    7914 out.go:177] * [embed-certs-355000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:44:02.088241    7914 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:44:02.088308    7914 notify.go:220] Checking for updates...
	I1216 12:44:02.096132    7914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:44:02.099201    7914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:44:02.100957    7914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:44:02.104130    7914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:44:02.107217    7914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:44:02.110858    7914 config.go:182] Loaded profile config "embed-certs-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:02.111118    7914 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:44:02.114203    7914 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:44:02.120936    7914 start.go:297] selected driver: qemu2
	I1216 12:44:02.120942    7914 start.go:901] validating driver "qemu2" against &{Name:embed-certs-355000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:02.120985    7914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:44:02.123486    7914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:44:02.123512    7914 cni.go:84] Creating CNI manager for ""
	I1216 12:44:02.123532    7914 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:44:02.123563    7914 start.go:340] cluster config:
	{Name:embed-certs-355000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:02.128173    7914 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:44:02.136157    7914 out.go:177] * Starting "embed-certs-355000" primary control-plane node in "embed-certs-355000" cluster
	I1216 12:44:02.140115    7914 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:44:02.140128    7914 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:44:02.140137    7914 cache.go:56] Caching tarball of preloaded images
	I1216 12:44:02.140210    7914 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:44:02.140215    7914 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:44:02.140261    7914 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/embed-certs-355000/config.json ...
	I1216 12:44:02.140814    7914 start.go:360] acquireMachinesLock for embed-certs-355000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:02.140849    7914 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "embed-certs-355000"
	I1216 12:44:02.140859    7914 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:02.140865    7914 fix.go:54] fixHost starting: 
	I1216 12:44:02.140991    7914 fix.go:112] recreateIfNeeded on embed-certs-355000: state=Stopped err=<nil>
	W1216 12:44:02.141000    7914 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:02.145175    7914 out.go:177] * Restarting existing qemu2 VM for "embed-certs-355000" ...
	I1216 12:44:02.153138    7914 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:02.153172    7914 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:34:cf:ab:8a:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:44:02.155328    7914 main.go:141] libmachine: STDOUT: 
	I1216 12:44:02.155347    7914 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:02.155379    7914 fix.go:56] duration metric: took 14.512917ms for fixHost
	I1216 12:44:02.155385    7914 start.go:83] releasing machines lock for "embed-certs-355000", held for 14.53075ms
	W1216 12:44:02.155390    7914 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:02.155435    7914 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:02.155439    7914 start.go:729] Will try again in 5 seconds ...
	I1216 12:44:07.157621    7914 start.go:360] acquireMachinesLock for embed-certs-355000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:08.032774    7914 start.go:364] duration metric: took 874.999291ms to acquireMachinesLock for "embed-certs-355000"
	I1216 12:44:08.032889    7914 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:08.032907    7914 fix.go:54] fixHost starting: 
	I1216 12:44:08.033707    7914 fix.go:112] recreateIfNeeded on embed-certs-355000: state=Stopped err=<nil>
	W1216 12:44:08.033737    7914 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:08.042115    7914 out.go:177] * Restarting existing qemu2 VM for "embed-certs-355000" ...
	I1216 12:44:08.062147    7914 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:08.062436    7914 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:34:cf:ab:8a:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/embed-certs-355000/disk.qcow2
	I1216 12:44:08.074674    7914 main.go:141] libmachine: STDOUT: 
	I1216 12:44:08.074727    7914 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:08.074799    7914 fix.go:56] duration metric: took 41.890583ms for fixHost
	I1216 12:44:08.074822    7914 start.go:83] releasing machines lock for "embed-certs-355000", held for 41.986958ms
	W1216 12:44:08.075022    7914 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-355000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-355000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:08.083194    7914 out.go:201] 
	W1216 12:44:08.086399    7914 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:08.086421    7914 out.go:270] * 
	* 
	W1216 12:44:08.089190    7914 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:44:08.104202    7914 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-355000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (63.984417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.16s)
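embed-certs fails identically to no-preload, which points at the shared host-level socket_vmnet problem rather than anything profile-specific (--embed-certs only changes how client certificates are stored in the kubeconfig). If stale VM state were also suspected, the log's own suggested remediation applies:

	# Suggested by the error output above; removes the profile's VM and config.
	out/minikube-darwin-arm64 delete -p embed-certs-355000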

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-456000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (35.380708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-456000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-456000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-456000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.918ms)

** stderr ** 
	error: context "no-preload-456000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-456000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (34.47475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-456000 image list --format=json
start_stop_delete_test.go:302: v1.32.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.16-0",
- 	"registry.k8s.io/kube-apiserver:v1.32.0",
- 	"registry.k8s.io/kube-controller-manager:v1.32.0",
- 	"registry.k8s.io/kube-proxy:v1.32.0",
- 	"registry.k8s.io/kube-scheduler:v1.32.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (33.564959ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
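The "-want" entries above are exactly the eight images the no-preload SecondStart log reported as cached to disk; because the VM is stopped, image list returns nothing and all eight show as missing. On a healthy cluster the listing the test performs is simply:

	# List the images loaded into the profile's runtime, as JSON.
	out/minikube-darwin-arm64 -p no-preload-456000 image list --format=json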

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-456000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-456000 --alsologtostderr -v=1: exit status 83 (42.971916ms)

-- stdout --
	* The control-plane node no-preload-456000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-456000"

-- /stdout --
** stderr ** 
	I1216 12:44:05.078989    7933 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:05.079527    7933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:05.079531    7933 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:05.079533    7933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:05.079702    7933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:05.079929    7933 out.go:352] Setting JSON to false
	I1216 12:44:05.079936    7933 mustload.go:65] Loading cluster: no-preload-456000
	I1216 12:44:05.080165    7933 config.go:182] Loaded profile config "no-preload-456000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:05.083250    7933 out.go:177] * The control-plane node no-preload-456000 host is not running: state=Stopped
	I1216 12:44:05.086145    7933 out.go:177]   To start a cluster, run: "minikube start -p no-preload-456000"

** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p no-preload-456000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (33.149666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (33.961041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-456000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
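Exit status 83 here accompanies minikube's "host is not running" guidance rather than a crash: pause inspected the profile, found state=Stopped, and printed advice instead of pausing. The post-mortem probe repeated throughout this report selects just the host field with a Go template (command verbatim from the log):

	# {{.Host}} renders only the host state; a stopped host yields the
	# non-zero exit the helper logs as "exit status 7 (may be ok)".
	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000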

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-304000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-304000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (9.951026375s)

-- stdout --
	* [default-k8s-diff-port-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-304000" primary control-plane node in "default-k8s-diff-port-304000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-304000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:44:05.550709    7957 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:05.550879    7957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:05.550882    7957 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:05.550885    7957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:05.550997    7957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:05.552144    7957 out.go:352] Setting JSON to false
	I1216 12:44:05.570955    7957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4416,"bootTime":1734377429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:44:05.571044    7957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:44:05.575139    7957 out.go:177] * [default-k8s-diff-port-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:44:05.582101    7957 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:44:05.582160    7957 notify.go:220] Checking for updates...
	I1216 12:44:05.590207    7957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:44:05.593134    7957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:44:05.596149    7957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:44:05.599197    7957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:44:05.602073    7957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:44:05.605444    7957 config.go:182] Loaded profile config "embed-certs-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:05.605503    7957 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:05.605555    7957 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:44:05.610139    7957 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:44:05.617089    7957 start.go:297] selected driver: qemu2
	I1216 12:44:05.617096    7957 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:44:05.617104    7957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:44:05.619616    7957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 12:44:05.623118    7957 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:44:05.626123    7957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:44:05.626143    7957 cni.go:84] Creating CNI manager for ""
	I1216 12:44:05.626164    7957 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:44:05.626170    7957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:44:05.626200    7957 start.go:340] cluster config:
	{Name:default-k8s-diff-port-304000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:05.631025    7957 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:44:05.639015    7957 out.go:177] * Starting "default-k8s-diff-port-304000" primary control-plane node in "default-k8s-diff-port-304000" cluster
	I1216 12:44:05.643201    7957 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:44:05.643215    7957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:44:05.643223    7957 cache.go:56] Caching tarball of preloaded images
	I1216 12:44:05.643340    7957 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:44:05.643353    7957 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:44:05.643429    7957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/default-k8s-diff-port-304000/config.json ...
	I1216 12:44:05.643441    7957 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/default-k8s-diff-port-304000/config.json: {Name:mk65e9662560c63b09ee9237d4a5685ff1c417b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:44:05.644379    7957 start.go:360] acquireMachinesLock for default-k8s-diff-port-304000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:05.644437    7957 start.go:364] duration metric: took 49.042µs to acquireMachinesLock for "default-k8s-diff-port-304000"
	I1216 12:44:05.644452    7957 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:44:05.644492    7957 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:44:05.653130    7957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:44:05.671332    7957 start.go:159] libmachine.API.Create for "default-k8s-diff-port-304000" (driver="qemu2")
	I1216 12:44:05.671357    7957 client.go:168] LocalClient.Create starting
	I1216 12:44:05.671429    7957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:44:05.671473    7957 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:05.671484    7957 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:05.671523    7957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:44:05.671554    7957 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:05.671565    7957 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:05.672007    7957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:44:05.927058    7957 main.go:141] libmachine: Creating SSH key...
	I1216 12:44:06.009633    7957 main.go:141] libmachine: Creating Disk image...
	I1216 12:44:06.009638    7957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:44:06.009891    7957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:06.019794    7957 main.go:141] libmachine: STDOUT: 
	I1216 12:44:06.019818    7957 main.go:141] libmachine: STDERR: 
	I1216 12:44:06.019876    7957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2 +20000M
	I1216 12:44:06.028451    7957 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:44:06.028467    7957 main.go:141] libmachine: STDERR: 
	I1216 12:44:06.028489    7957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:06.028496    7957 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:44:06.028508    7957 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:06.028537    7957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:e9:51:46:8b:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:06.030325    7957 main.go:141] libmachine: STDOUT: 
	I1216 12:44:06.030339    7957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:06.030358    7957 client.go:171] duration metric: took 358.991875ms to LocalClient.Create
	I1216 12:44:08.032525    7957 start.go:128] duration metric: took 2.388008333s to createHost
	I1216 12:44:08.032594    7957 start.go:83] releasing machines lock for "default-k8s-diff-port-304000", held for 2.388147625s
	W1216 12:44:08.032650    7957 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:08.058238    7957 out.go:177] * Deleting "default-k8s-diff-port-304000" in qemu2 ...
	W1216 12:44:08.117110    7957 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:08.117147    7957 start.go:729] Will try again in 5 seconds ...
	I1216 12:44:13.119470    7957 start.go:360] acquireMachinesLock for default-k8s-diff-port-304000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:13.120086    7957 start.go:364] duration metric: took 485.583µs to acquireMachinesLock for "default-k8s-diff-port-304000"
	I1216 12:44:13.120233    7957 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:44:13.120538    7957 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:44:13.131248    7957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:44:13.179734    7957 start.go:159] libmachine.API.Create for "default-k8s-diff-port-304000" (driver="qemu2")
	I1216 12:44:13.179792    7957 client.go:168] LocalClient.Create starting
	I1216 12:44:13.179941    7957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:44:13.180031    7957 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:13.180052    7957 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:13.180122    7957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:44:13.180179    7957 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:13.180192    7957 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:13.182628    7957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:44:13.360226    7957 main.go:141] libmachine: Creating SSH key...
	I1216 12:44:13.402821    7957 main.go:141] libmachine: Creating Disk image...
	I1216 12:44:13.402827    7957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:44:13.403056    7957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:13.412964    7957 main.go:141] libmachine: STDOUT: 
	I1216 12:44:13.412988    7957 main.go:141] libmachine: STDERR: 
	I1216 12:44:13.413055    7957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2 +20000M
	I1216 12:44:13.421518    7957 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:44:13.421532    7957 main.go:141] libmachine: STDERR: 
	I1216 12:44:13.421544    7957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:13.421549    7957 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:44:13.421556    7957 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:13.421592    7957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:18:19:ec:28:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:13.423433    7957 main.go:141] libmachine: STDOUT: 
	I1216 12:44:13.423447    7957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:13.423461    7957 client.go:171] duration metric: took 243.662417ms to LocalClient.Create
	I1216 12:44:15.425809    7957 start.go:128] duration metric: took 2.305231042s to createHost
	I1216 12:44:15.425968    7957 start.go:83] releasing machines lock for "default-k8s-diff-port-304000", held for 2.305786208s
	W1216 12:44:15.426348    7957 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:15.434887    7957 out.go:201] 
	W1216 12:44:15.444086    7957 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:15.444139    7957 out.go:270] * 
	* 
	W1216 12:44:15.447110    7957 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:44:15.454949    7957 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-304000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (73.7345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.03s)
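
Note: every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so minikube deletes the half-created VM, retries once after 5 seconds (start.go:729 above), fails identically, and exits 80 after ~10s. A sketch for checking the daemon on the build host, assuming a Homebrew-managed socket_vmnet install (paths taken from the command line logged above):

    ls -l /var/run/socket_vmnet              # the unix socket should exist
    sudo brew services info socket_vmnet     # is the launchd service running?
    sudo brew services restart socket_vmnet  # restart it, then re-run the test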

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-355000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (35.453084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
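
Note: this check never reaches a cluster; the kubeconfig context for the profile was never written because the profile's start failed. A quick confirmation with stock kubectl:

    kubectl config get-contexts                       # embed-certs-355000 is absent
    kubectl --context embed-certs-355000 get pods -A  # error: context "embed-certs-355000" does not exist

The same missing-context error repeats in AddonExistsAfterStop below.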

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-355000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-355000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-355000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.463625ms)

** stderr ** 
	error: context "embed-certs-355000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-355000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (34.234625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-355000 image list --format=json
start_stop_delete_test.go:302: v1.32.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.16-0",
- 	"registry.k8s.io/kube-apiserver:v1.32.0",
- 	"registry.k8s.io/kube-controller-manager:v1.32.0",
- 	"registry.k8s.io/kube-proxy:v1.32.0",
- 	"registry.k8s.io/kube-scheduler:v1.32.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (34.27425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
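
Note: an empty "got" list is what a never-booted VM yields, since images are only present once the node is running. The "-want" set is the stock v1.32.0 control-plane image set plus minikube's storage provisioner; assuming a local kubeadm binary is available, it can be cross-checked independently:

    kubeadm config images list --kubernetes-version v1.32.0
    # expected to mirror the -want list above, minus the minikube-specific
    # gcr.io/k8s-minikube/storage-provisioner:v5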

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-355000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-355000 --alsologtostderr -v=1: exit status 83 (50.179875ms)

-- stdout --
	* The control-plane node embed-certs-355000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-355000"

-- /stdout --
** stderr ** 
	I1216 12:44:08.393385    7979 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:08.393580    7979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:08.393584    7979 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:08.393586    7979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:08.393710    7979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:08.393946    7979 out.go:352] Setting JSON to false
	I1216 12:44:08.393954    7979 mustload.go:65] Loading cluster: embed-certs-355000
	I1216 12:44:08.394194    7979 config.go:182] Loaded profile config "embed-certs-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:08.399162    7979 out.go:177] * The control-plane node embed-certs-355000 host is not running: state=Stopped
	I1216 12:44:08.406219    7979 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-355000"

** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p embed-certs-355000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (33.696417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (33.229209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-355000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)

TestStartStop/group/newest-cni/serial/FirstStart (10.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-225000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-225000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (10.345150834s)

-- stdout --
	* [newest-cni-225000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-225000" primary control-plane node in "newest-cni-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:44:08.747976    7996 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:08.748169    7996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:08.748172    7996 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:08.748174    7996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:08.748290    7996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:08.749742    7996 out.go:352] Setting JSON to false
	I1216 12:44:08.769101    7996 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4419,"bootTime":1734377429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:44:08.769177    7996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:44:08.773117    7996 out.go:177] * [newest-cni-225000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:44:08.781764    7996 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:44:08.781850    7996 notify.go:220] Checking for updates...
	I1216 12:44:08.790146    7996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:44:08.793224    7996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:44:08.796147    7996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:44:08.799118    7996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:44:08.802222    7996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:44:08.805460    7996 config.go:182] Loaded profile config "default-k8s-diff-port-304000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:08.805528    7996 config.go:182] Loaded profile config "multinode-148000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:08.805582    7996 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:44:08.810126    7996 out.go:177] * Using the qemu2 driver based on user configuration
	I1216 12:44:08.817062    7996 start.go:297] selected driver: qemu2
	I1216 12:44:08.817069    7996 start.go:901] validating driver "qemu2" against <nil>
	I1216 12:44:08.817075    7996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:44:08.819562    7996 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1216 12:44:08.819604    7996 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 12:44:08.826159    7996 out.go:177] * Automatically selected the socket_vmnet network
	I1216 12:44:08.829206    7996 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 12:44:08.829219    7996 cni.go:84] Creating CNI manager for ""
	I1216 12:44:08.829241    7996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:44:08.829249    7996 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 12:44:08.829284    7996 start.go:340] cluster config:
	{Name:newest-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:08.834278    7996 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:44:08.840999    7996 out.go:177] * Starting "newest-cni-225000" primary control-plane node in "newest-cni-225000" cluster
	I1216 12:44:08.845128    7996 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:44:08.845145    7996 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:44:08.845155    7996 cache.go:56] Caching tarball of preloaded images
	I1216 12:44:08.845227    7996 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:44:08.845232    7996 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:44:08.845284    7996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/newest-cni-225000/config.json ...
	I1216 12:44:08.845305    7996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/newest-cni-225000/config.json: {Name:mkc84b0319aa3b831c1c8c4edfd4adc3656a7a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 12:44:08.845900    7996 start.go:360] acquireMachinesLock for newest-cni-225000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:08.845951    7996 start.go:364] duration metric: took 45.042µs to acquireMachinesLock for "newest-cni-225000"
	I1216 12:44:08.845966    7996 start.go:93] Provisioning new machine with config: &{Name:newest-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:44:08.846034    7996 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:44:08.850073    7996 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:44:08.868295    7996 start.go:159] libmachine.API.Create for "newest-cni-225000" (driver="qemu2")
	I1216 12:44:08.868325    7996 client.go:168] LocalClient.Create starting
	I1216 12:44:08.868400    7996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:44:08.868442    7996 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:08.868457    7996 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:08.868493    7996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:44:08.868529    7996 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:08.868536    7996 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:08.869011    7996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:44:09.045347    7996 main.go:141] libmachine: Creating SSH key...
	I1216 12:44:09.288044    7996 main.go:141] libmachine: Creating Disk image...
	I1216 12:44:09.288054    7996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:44:09.288331    7996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:09.298995    7996 main.go:141] libmachine: STDOUT: 
	I1216 12:44:09.299024    7996 main.go:141] libmachine: STDERR: 
	I1216 12:44:09.299086    7996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2 +20000M
	I1216 12:44:09.307938    7996 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:44:09.307953    7996 main.go:141] libmachine: STDERR: 
	I1216 12:44:09.307974    7996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:09.307980    7996 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:44:09.307994    7996 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:09.308020    7996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b3:11:34:48:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:09.309860    7996 main.go:141] libmachine: STDOUT: 
	I1216 12:44:09.309873    7996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:09.309896    7996 client.go:171] duration metric: took 441.565292ms to LocalClient.Create
	I1216 12:44:11.312102    7996 start.go:128] duration metric: took 2.466046875s to createHost
	I1216 12:44:11.312171    7996 start.go:83] releasing machines lock for "newest-cni-225000", held for 2.466209334s
	W1216 12:44:11.312332    7996 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:11.322154    7996 out.go:177] * Deleting "newest-cni-225000" in qemu2 ...
	W1216 12:44:11.361276    7996 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:11.361314    7996 start.go:729] Will try again in 5 seconds ...
	I1216 12:44:16.363572    7996 start.go:360] acquireMachinesLock for newest-cni-225000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:16.363957    7996 start.go:364] duration metric: took 303.125µs to acquireMachinesLock for "newest-cni-225000"
	I1216 12:44:16.364148    7996 start.go:93] Provisioning new machine with config: &{Name:newest-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 12:44:16.364446    7996 start.go:125] createHost starting for "" (driver="qemu2")
	I1216 12:44:16.369149    7996 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 12:44:16.417311    7996 start.go:159] libmachine.API.Create for "newest-cni-225000" (driver="qemu2")
	I1216 12:44:16.417363    7996 client.go:168] LocalClient.Create starting
	I1216 12:44:16.417464    7996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/ca.pem
	I1216 12:44:16.417519    7996 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:16.417536    7996 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:16.417613    7996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20091-990/.minikube/certs/cert.pem
	I1216 12:44:16.417644    7996 main.go:141] libmachine: Decoding PEM data...
	I1216 12:44:16.417658    7996 main.go:141] libmachine: Parsing certificate...
	I1216 12:44:16.418385    7996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20091-990/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso...
	I1216 12:44:16.641995    7996 main.go:141] libmachine: Creating SSH key...
	I1216 12:44:16.987942    7996 main.go:141] libmachine: Creating Disk image...
	I1216 12:44:16.987954    7996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1216 12:44:16.988237    7996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:16.998809    7996 main.go:141] libmachine: STDOUT: 
	I1216 12:44:16.998825    7996 main.go:141] libmachine: STDERR: 
	I1216 12:44:16.998993    7996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2 +20000M
	I1216 12:44:17.007563    7996 main.go:141] libmachine: STDOUT: Image resized.
	
	I1216 12:44:17.007581    7996 main.go:141] libmachine: STDERR: 
	I1216 12:44:17.007595    7996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:17.007601    7996 main.go:141] libmachine: Starting QEMU VM...
	I1216 12:44:17.007610    7996 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:17.007657    7996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:55:29:97:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:17.009597    7996 main.go:141] libmachine: STDOUT: 
	I1216 12:44:17.009610    7996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:17.009624    7996 client.go:171] duration metric: took 592.254375ms to LocalClient.Create
	I1216 12:44:19.011801    7996 start.go:128] duration metric: took 2.647323625s to createHost
	I1216 12:44:19.011862    7996 start.go:83] releasing machines lock for "newest-cni-225000", held for 2.647882s
	W1216 12:44:19.012158    7996 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:19.023661    7996 out.go:201] 
	W1216 12:44:19.031727    7996 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:19.031785    7996 out.go:270] * 
	* 
	W1216 12:44:19.034976    7996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:44:19.043683    7996 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-225000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000: exit status 7 (72.129208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.42s)
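Triage note: every failure in this group reduces to the same line in the logs above. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must obtain a network file descriptor from the vmnet helper's socket (hence "-netdev socket,id=net0,fd=3" on the QEMU command line), and nothing is answering on /var/run/socket_vmnet, so host creation aborts with "Connection refused" before any VM boots. A first check on the test host could look like the sketch below; it assumes socket_vmnet was installed as a Homebrew-managed service, which this report does not state:

	# Does the socket exist, and is a daemon answering on it?
	ls -l /var/run/socket_vmnet
	# Assuming a Homebrew-managed socket_vmnet service (an assumption, not from this report):
	sudo brew services restart socket_vmnet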

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-304000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304000 create -f testdata/busybox.yaml: exit status 1 (33.316125ms)

** stderr ** 
	error: context "default-k8s-diff-port-304000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context default-k8s-diff-port-304000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (33.508208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (33.106333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
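Triage note: the kubectl failure here is downstream of the start failure, not an independent bug. Because the cluster never came up, minikube never wrote a "default-k8s-diff-port-304000" context into the kubeconfig, so every kubectl --context invocation in the remaining subtests fails immediately. That can be confirmed against the kubeconfig this run uses (path taken from the minikube stdout above):

	# Contexts actually present after the failed start:
	KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig kubectl config get-contexts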

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-304000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-304000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304000 describe deploy/metrics-server -n kube-system: exit status 1 (28.230667ms)

** stderr ** 
	error: context "default-k8s-diff-port-304000" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-304000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (33.369042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
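Triage note: the expectation at start_stop_delete_test.go:219 shows how the two enable-time flags compose: --registries=MetricsServer=fake.domain is prefixed onto the path given by --images=MetricsServer=registry.k8s.io/echoserver:1.4, yielding the image reference fake.domain/registry.k8s.io/echoserver:1.4 in the deployment. On a healthy profile the same override can be reproduced by hand (flags copied from the test invocation above):

	minikube addons enable metrics-server -p default-k8s-diff-port-304000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain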

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-304000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-304000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (6.316568042s)

-- stdout --
	* [default-k8s-diff-port-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-304000" primary control-plane node in "default-k8s-diff-port-304000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-304000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-304000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:44:17.817962    8043 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:17.818138    8043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:17.818141    8043 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:17.818144    8043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:17.818281    8043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:17.819443    8043 out.go:352] Setting JSON to false
	I1216 12:44:17.839141    8043 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4428,"bootTime":1734377429,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:44:17.839217    8043 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:44:17.844169    8043 out.go:177] * [default-k8s-diff-port-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:44:17.853125    8043 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:44:17.853161    8043 notify.go:220] Checking for updates...
	I1216 12:44:17.861075    8043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:44:17.864134    8043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:44:17.867088    8043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:44:17.870094    8043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:44:17.873104    8043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:44:17.876352    8043 config.go:182] Loaded profile config "default-k8s-diff-port-304000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:17.876618    8043 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:44:17.880036    8043 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:44:17.886066    8043 start.go:297] selected driver: qemu2
	I1216 12:44:17.886073    8043 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:17.886133    8043 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:44:17.888995    8043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 12:44:17.889023    8043 cni.go:84] Creating CNI manager for ""
	I1216 12:44:17.889044    8043 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:44:17.889069    8043 start.go:340] cluster config:
	{Name:default-k8s-diff-port-304000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-304000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:17.893682    8043 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:44:17.902106    8043 out.go:177] * Starting "default-k8s-diff-port-304000" primary control-plane node in "default-k8s-diff-port-304000" cluster
	I1216 12:44:17.905082    8043 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:44:17.905096    8043 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:44:17.905107    8043 cache.go:56] Caching tarball of preloaded images
	I1216 12:44:17.905198    8043 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:44:17.905213    8043 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:44:17.905263    8043 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/default-k8s-diff-port-304000/config.json ...
	I1216 12:44:17.905796    8043 start.go:360] acquireMachinesLock for default-k8s-diff-port-304000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:19.011976    8043 start.go:364] duration metric: took 1.106152583s to acquireMachinesLock for "default-k8s-diff-port-304000"
	I1216 12:44:19.012192    8043 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:19.012225    8043 fix.go:54] fixHost starting: 
	I1216 12:44:19.012992    8043 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304000: state=Stopped err=<nil>
	W1216 12:44:19.013039    8043 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:19.023627    8043 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-304000" ...
	I1216 12:44:19.027714    8043 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:19.028013    8043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:18:19:ec:28:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:19.038847    8043 main.go:141] libmachine: STDOUT: 
	I1216 12:44:19.038918    8043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:19.039038    8043 fix.go:56] duration metric: took 26.819166ms for fixHost
	I1216 12:44:19.039059    8043 start.go:83] releasing machines lock for "default-k8s-diff-port-304000", held for 27.047375ms
	W1216 12:44:19.039085    8043 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:19.039323    8043 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:19.039341    8043 start.go:729] Will try again in 5 seconds ...
	I1216 12:44:24.041480    8043 start.go:360] acquireMachinesLock for default-k8s-diff-port-304000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:24.041907    8043 start.go:364] duration metric: took 316.375µs to acquireMachinesLock for "default-k8s-diff-port-304000"
	I1216 12:44:24.042067    8043 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:24.042092    8043 fix.go:54] fixHost starting: 
	I1216 12:44:24.042881    8043 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304000: state=Stopped err=<nil>
	W1216 12:44:24.042906    8043 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:24.052713    8043 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-304000" ...
	I1216 12:44:24.056590    8043 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:24.056788    8043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:18:19:ec:28:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/default-k8s-diff-port-304000/disk.qcow2
	I1216 12:44:24.067089    8043 main.go:141] libmachine: STDOUT: 
	I1216 12:44:24.067177    8043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:24.067258    8043 fix.go:56] duration metric: took 25.172667ms for fixHost
	I1216 12:44:24.067278    8043 start.go:83] releasing machines lock for "default-k8s-diff-port-304000", held for 25.347167ms
	W1216 12:44:24.067503    8043 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-304000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-304000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:24.074654    8043 out.go:201] 
	W1216 12:44:24.078653    8043 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:24.078696    8043 out.go:270] * 
	* 
	W1216 12:44:24.081554    8043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:44:24.088633    8043 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-304000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (71.180042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.39s)
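Triage note: unlike the first start, this run takes the existing-machine path (start.go:96 "Skipping create...Using existing machine configuration" into fixHost) and merely restarts the stopped VM, but it still has to dial /var/run/socket_vmnet, fails, retries once after 5 seconds (start.go:729), and exits with GUEST_PROVISION. The vmnet client can be exercised on its own to separate socket problems from QEMU problems; the trailing "true" is a stand-in child command for illustration, not something the harness runs:

	# Dial the socket the same way libmachine does; a refused connection fails fast:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true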

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-225000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-225000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.32.0: exit status 80 (5.190467125s)

-- stdout --
	* [newest-cni-225000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-225000" primary control-plane node in "newest-cni-225000" cluster
	* Restarting existing qemu2 VM for "newest-cni-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1216 12:44:22.403852    8076 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:22.404008    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:22.404011    8076 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:22.404014    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:22.404161    8076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:22.405339    8076 out.go:352] Setting JSON to false
	I1216 12:44:22.423192    8076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4433,"bootTime":1734377429,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 12:44:22.423275    8076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 12:44:22.428048    8076 out.go:177] * [newest-cni-225000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 12:44:22.433973    8076 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 12:44:22.434048    8076 notify.go:220] Checking for updates...
	I1216 12:44:22.441009    8076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 12:44:22.444018    8076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 12:44:22.447055    8076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:44:22.450029    8076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 12:44:22.452983    8076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:44:22.456223    8076 config.go:182] Loaded profile config "newest-cni-225000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:22.456498    8076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:44:22.460999    8076 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 12:44:22.468009    8076 start.go:297] selected driver: qemu2
	I1216 12:44:22.468016    8076 start.go:901] validating driver "qemu2" against &{Name:newest-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:22.468090    8076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:44:22.470676    8076 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 12:44:22.470703    8076 cni.go:84] Creating CNI manager for ""
	I1216 12:44:22.470727    8076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 12:44:22.470752    8076 start.go:340] cluster config:
	{Name:newest-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 12:44:22.475223    8076 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 12:44:22.482965    8076 out.go:177] * Starting "newest-cni-225000" primary control-plane node in "newest-cni-225000" cluster
	I1216 12:44:22.486065    8076 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 12:44:22.486081    8076 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 12:44:22.486099    8076 cache.go:56] Caching tarball of preloaded images
	I1216 12:44:22.486186    8076 preload.go:172] Found /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 12:44:22.486192    8076 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 12:44:22.486250    8076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/newest-cni-225000/config.json ...
	I1216 12:44:22.486744    8076 start.go:360] acquireMachinesLock for newest-cni-225000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:22.486775    8076 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "newest-cni-225000"
	I1216 12:44:22.486784    8076 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:22.486790    8076 fix.go:54] fixHost starting: 
	I1216 12:44:22.486910    8076 fix.go:112] recreateIfNeeded on newest-cni-225000: state=Stopped err=<nil>
	W1216 12:44:22.486918    8076 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:22.491007    8076 out.go:177] * Restarting existing qemu2 VM for "newest-cni-225000" ...
	I1216 12:44:22.498926    8076 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:22.498957    8076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:55:29:97:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:22.501178    8076 main.go:141] libmachine: STDOUT: 
	I1216 12:44:22.501202    8076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:22.501232    8076 fix.go:56] duration metric: took 14.441292ms for fixHost
	I1216 12:44:22.501238    8076 start.go:83] releasing machines lock for "newest-cni-225000", held for 14.458458ms
	W1216 12:44:22.501243    8076 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:22.501297    8076 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:22.501302    8076 start.go:729] Will try again in 5 seconds ...
	I1216 12:44:27.503488    8076 start.go:360] acquireMachinesLock for newest-cni-225000: {Name:mk9a3288c9431988222651c8b7fae2aeac2ce54f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 12:44:27.503993    8076 start.go:364] duration metric: took 428.459µs to acquireMachinesLock for "newest-cni-225000"
	I1216 12:44:27.504133    8076 start.go:96] Skipping create...Using existing machine configuration
	I1216 12:44:27.504152    8076 fix.go:54] fixHost starting: 
	I1216 12:44:27.504943    8076 fix.go:112] recreateIfNeeded on newest-cni-225000: state=Stopped err=<nil>
	W1216 12:44:27.504970    8076 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 12:44:27.508546    8076 out.go:177] * Restarting existing qemu2 VM for "newest-cni-225000" ...
	I1216 12:44:27.517347    8076 qemu.go:418] Using hvf for hardware acceleration
	I1216 12:44:27.517545    8076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:55:29:97:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20091-990/.minikube/machines/newest-cni-225000/disk.qcow2
	I1216 12:44:27.528593    8076 main.go:141] libmachine: STDOUT: 
	I1216 12:44:27.528657    8076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1216 12:44:27.528742    8076 fix.go:56] duration metric: took 24.590708ms for fixHost
	I1216 12:44:27.528761    8076 start.go:83] releasing machines lock for "newest-cni-225000", held for 24.742167ms
	W1216 12:44:27.528942    8076 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-225000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-225000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1216 12:44:27.535349    8076 out.go:201] 
	W1216 12:44:27.538421    8076 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1216 12:44:27.538438    8076 out.go:270] * 
	* 
	W1216 12:44:27.540639    8076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:44:27.549280    8076 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-225000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.32.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000: exit status 7 (72.395875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-304000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (35.472041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-304000" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-304000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.316291ms)

** stderr ** 
	error: context "default-k8s-diff-port-304000" does not exist

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-304000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (32.755708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-304000 image list --format=json
start_stop_delete_test.go:302: v1.32.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.16-0",
- 	"registry.k8s.io/kube-apiserver:v1.32.0",
- 	"registry.k8s.io/kube-controller-manager:v1.32.0",
- 	"registry.k8s.io/kube-proxy:v1.32.0",
- 	"registry.k8s.io/kube-scheduler:v1.32.0",
- 	"registry.k8s.io/pause:3.10",
}
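Triage note: the diff above is in -want +got form, and every entry carries a "-": the entire expected v1.32.0 image set (API server, controller-manager, scheduler, proxy, etcd 3.5.16-0, CoreDNS v1.11.3, pause 3.10, storage-provisioner v5) is wanted but absent, and nothing at all came back from "image list", which is consistent with a VM that never booted. On a working host these images would have been populated from the preloaded tarball noted earlier in the run (preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4) rather than pulled.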
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (32.57975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
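
The block above is go-cmp style "(-want +got)" output: every expected image sits on the "-" side because image list against the stopped VM returned nothing. A sketch of the shape of that comparison, assuming github.com/google/go-cmp (the exact test code may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.32.0",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // empty: the stopped host reports no images

	// cmp.Diff prints want-only entries with a leading "-", which is
	// exactly the pattern in the log above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.32.0 images missing (-want +got):\n%s", diff)
	}
}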

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-304000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-304000 --alsologtostderr -v=1: exit status 83 (44.917542ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-304000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-304000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:44:24.378737    8097 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:24.378931    8097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:24.378934    8097 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:24.378937    8097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:24.379096    8097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:24.379320    8097 out.go:352] Setting JSON to false
	I1216 12:44:24.379327    8097 mustload.go:65] Loading cluster: default-k8s-diff-port-304000
	I1216 12:44:24.379554    8097 config.go:182] Loaded profile config "default-k8s-diff-port-304000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:24.384385    8097 out.go:177] * The control-plane node default-k8s-diff-port-304000 host is not running: state=Stopped
	I1216 12:44:24.387255    8097 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-304000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-304000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (33.072209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (33.23075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
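
Here pause against a stopped profile exits 83 and prints advice instead of pausing; the test treats any non-zero exit as failure. A sketch of how a caller can observe that exit code, assuming the binary path from the log (83 is taken from this run, not from a documented exit-code table):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "default-k8s-diff-port-304000")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In this run the code was 83, accompanying the
		// "host is not running: state=Stopped" message.
		fmt.Println("pause exit code:", ee.ExitCode())
	}
}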

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-225000 image list --format=json
start_stop_delete_test.go:302: v1.32.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.16-0",
- 	"registry.k8s.io/kube-apiserver:v1.32.0",
- 	"registry.k8s.io/kube-controller-manager:v1.32.0",
- 	"registry.k8s.io/kube-proxy:v1.32.0",
- 	"registry.k8s.io/kube-scheduler:v1.32.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000: exit status 7 (34.561041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-225000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-225000 --alsologtostderr -v=1: exit status 83 (46.848042ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-225000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-225000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 12:44:27.751930    8121 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:44:27.752129    8121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:27.752132    8121 out.go:358] Setting ErrFile to fd 2...
	I1216 12:44:27.752135    8121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:44:27.752273    8121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 12:44:27.752526    8121 out.go:352] Setting JSON to false
	I1216 12:44:27.752533    8121 mustload.go:65] Loading cluster: newest-cni-225000
	I1216 12:44:27.752752    8121 config.go:182] Loaded profile config "newest-cni-225000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 12:44:27.757535    8121 out.go:177] * The control-plane node newest-cni-225000 host is not running: state=Stopped
	I1216 12:44:27.761624    8121 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-225000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-darwin-arm64 pause -p newest-cni-225000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000: exit status 7 (35.281792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000: exit status 7 (35.4295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                    

Test pass (152/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.32.0/json-events 8.58
13 TestDownloadOnly/v1.32.0/preload-exists 0
16 TestDownloadOnly/v1.32.0/kubectl 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.08
18 TestDownloadOnly/v1.32.0/DeleteAll 0.12
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.36
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 199.31
29 TestAddons/serial/Volcano 39.14
31 TestAddons/serial/GCPAuth/Namespaces 0.09
32 TestAddons/serial/GCPAuth/FakeCredentials 9.41
35 TestAddons/parallel/Registry 14.63
36 TestAddons/parallel/Ingress 19.76
37 TestAddons/parallel/InspektorGadget 10.3
38 TestAddons/parallel/MetricsServer 6.29
40 TestAddons/parallel/CSI 35.58
41 TestAddons/parallel/Headlamp 17.62
42 TestAddons/parallel/CloudSpanner 6.22
43 TestAddons/parallel/LocalPath 52.19
44 TestAddons/parallel/NvidiaDevicePlugin 6.2
45 TestAddons/parallel/Yakd 10.28
47 TestAddons/StoppedEnableDisable 12.44
55 TestHyperKitDriverInstallOrUpdate 11.49
58 TestErrorSpam/setup 36.26
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.26
61 TestErrorSpam/pause 0.66
62 TestErrorSpam/unpause 0.6
63 TestErrorSpam/stop 64.3
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.71
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.99
70 TestFunctional/serial/KubeContext 0.03
71 TestFunctional/serial/KubectlGetPods 0.04
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
75 TestFunctional/serial/CacheCmd/cache/add_local 1.16
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0.7
80 TestFunctional/serial/CacheCmd/cache/delete 0.08
81 TestFunctional/serial/MinikubeKubectlCmd 0.78
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.27
83 TestFunctional/serial/ExtraConfig 38.17
84 TestFunctional/serial/ComponentHealth 0.04
85 TestFunctional/serial/LogsCmd 0.66
86 TestFunctional/serial/LogsFileCmd 0.65
87 TestFunctional/serial/InvalidService 4.36
89 TestFunctional/parallel/ConfigCmd 0.25
90 TestFunctional/parallel/DashboardCmd 9.89
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.25
98 TestFunctional/parallel/AddonsCmd 0.11
99 TestFunctional/parallel/PersistentVolumeClaim 24.33
101 TestFunctional/parallel/SSHCmd 0.13
102 TestFunctional/parallel/CpCmd 0.43
104 TestFunctional/parallel/FileSync 0.07
105 TestFunctional/parallel/CertSync 0.4
109 TestFunctional/parallel/NodeLabels 0.04
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
113 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/Version/short 0.04
115 TestFunctional/parallel/Version/components 0.19
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
120 TestFunctional/parallel/ImageCommands/ImageBuild 1.95
121 TestFunctional/parallel/ImageCommands/Setup 1.8
122 TestFunctional/parallel/DockerEnv/bash 0.34
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.13
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.68
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.35
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.33
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.21
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
145 TestFunctional/parallel/ServiceCmd/List 0.31
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
148 TestFunctional/parallel/ServiceCmd/Format 0.1
149 TestFunctional/parallel/ServiceCmd/URL 0.1
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
151 TestFunctional/parallel/ProfileCmd/profile_list 0.14
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
153 TestFunctional/parallel/MountCmd/any-port 7.17
154 TestFunctional/parallel/MountCmd/specific-port 0.9
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.01
158 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/CopyFile 0.04
176 TestImageBuild/serial/Setup 34.18
177 TestImageBuild/serial/NormalBuild 1.39
178 TestImageBuild/serial/BuildWithBuildArg 0.4
179 TestImageBuild/serial/BuildWithDockerIgnore 0.34
180 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
185 TestJSONOutput/start/Audit 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
212 TestMainNoArgs 0.04
213 TestMinikubeProfile 75.64
259 TestStoppedBinaryUpgrade/Setup 1.11
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.39
277 TestNoKubernetes/serial/Stop 3.54
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
294 TestStartStop/group/old-k8s-version/serial/Stop 3.5
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
307 TestStartStop/group/no-preload/serial/Stop 1.79
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
312 TestStartStop/group/embed-certs/serial/Stop 1.87
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.88
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
334 TestStartStop/group/newest-cni/serial/Stop 3.04
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 11:34:59.053137    1494 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1216 11:34:59.053985    1494 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-651000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-651000: exit status 85 (102.125833ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-651000 | jenkins | v1.34.0 | 16 Dec 24 11:34 PST |          |
	|         | -p download-only-651000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:34:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:34:30.123336    1495 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:34:30.123499    1495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:30.123503    1495 out.go:358] Setting ErrFile to fd 2...
	I1216 11:34:30.123505    1495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:30.123637    1495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	W1216 11:34:30.123703    1495 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20091-990/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20091-990/.minikube/config/config.json: no such file or directory
	I1216 11:34:30.125091    1495 out.go:352] Setting JSON to true
	I1216 11:34:30.144093    1495 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":241,"bootTime":1734377429,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:34:30.144162    1495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:34:30.148957    1495 out.go:97] [download-only-651000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:34:30.149162    1495 notify.go:220] Checking for updates...
	W1216 11:34:30.149203    1495 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 11:34:30.153033    1495 out.go:169] MINIKUBE_LOCATION=20091
	I1216 11:34:30.160084    1495 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:34:30.164965    1495 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:34:30.169014    1495 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:34:30.171919    1495 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	W1216 11:34:30.177984    1495 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 11:34:30.178225    1495 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:34:30.180708    1495 out.go:97] Using the qemu2 driver based on user configuration
	I1216 11:34:30.180728    1495 start.go:297] selected driver: qemu2
	I1216 11:34:30.180751    1495 start.go:901] validating driver "qemu2" against <nil>
	I1216 11:34:30.180829    1495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:34:30.183970    1495 out.go:169] Automatically selected the socket_vmnet network
	I1216 11:34:30.189870    1495 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1216 11:34:30.189970    1495 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 11:34:30.189999    1495 cni.go:84] Creating CNI manager for ""
	I1216 11:34:30.190042    1495 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 11:34:30.190105    1495 start.go:340] cluster config:
	{Name:download-only-651000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-651000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:34:30.194631    1495 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:34:30.198991    1495 out.go:97] Downloading VM boot image ...
	I1216 11:34:30.199015    1495 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/iso/arm64/minikube-v1.34.0-1734029574-20090-arm64.iso
	I1216 11:34:41.876049    1495 out.go:97] Starting "download-only-651000" primary control-plane node in "download-only-651000" cluster
	I1216 11:34:41.876070    1495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 11:34:41.933921    1495 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 11:34:41.933944    1495 cache.go:56] Caching tarball of preloaded images
	I1216 11:34:41.934118    1495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 11:34:41.939216    1495 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 11:34:41.939222    1495 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:42.021226    1495 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 11:34:57.707195    1495 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:57.707394    1495 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:58.401718    1495 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1216 11:34:58.401916    1495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/download-only-651000/config.json ...
	I1216 11:34:58.401932    1495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/download-only-651000/config.json: {Name:mkc47c5693f7ee3d018304764c489840134397e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:34:58.402219    1495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 11:34:58.402469    1495 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1216 11:34:59.004361    1495 out.go:193] 
	W1216 11:34:59.010242    1495 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20091-990/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109410600 0x109410600 0x109410600 0x109410600 0x109410600 0x109410600 0x109410600] Decompressors:map[bz2:0x14000737d00 gz:0x14000737d08 tar:0x14000737cb0 tar.bz2:0x14000737cc0 tar.gz:0x14000737cd0 tar.xz:0x14000737ce0 tar.zst:0x14000737cf0 tbz2:0x14000737cc0 tgz:0x14000737cd0 txz:0x14000737ce0 tzst:0x14000737cf0 xz:0x14000737d10 zip:0x14000737d20 zst:0x14000737d18] Getters:map[file:0x14000a666d0 http:0x140008880a0 https:0x140008880f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1216 11:34:59.010266    1495 out_reason.go:110] 
	W1216 11:34:59.018180    1495 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:34:59.021101    1495 out.go:193] 
	
	
	* The control-plane node download-only-651000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-651000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
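
The interesting part of this otherwise-passing test is the cached failure in its log: the kubectl download URL carries ?checksum=file:<url>.sha256, so the downloader fetches the checksum file first, and dl.k8s.io answers 404 for it, most likely because no darwin/arm64 kubectl was published for v1.20.0. A small probe that reproduces just the 404, assuming network access (the URL is copied from the log):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The downloader resolves this checksum URL before touching the binary;
	// a 404 here aborts the whole kubectl download, as seen above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // expected: 404 Not Found, matching the log
}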

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-651000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.32.0/json-events (8.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-800000 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-800000 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=docker --driver=qemu2 : (8.574956875s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (8.58s)

                                                
                                    
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I1216 11:35:08.011964    1494 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
I1216 11:35:08.012013    1494 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
--- PASS: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-800000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-800000: exit status 85 (82.679625ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-651000 | jenkins | v1.34.0 | 16 Dec 24 11:34 PST |                     |
	|         | -p download-only-651000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 11:34 PST | 16 Dec 24 11:34 PST |
	| delete  | -p download-only-651000        | download-only-651000 | jenkins | v1.34.0 | 16 Dec 24 11:34 PST | 16 Dec 24 11:34 PST |
	| start   | -o=json --download-only        | download-only-800000 | jenkins | v1.34.0 | 16 Dec 24 11:34 PST |                     |
	|         | -p download-only-800000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:34:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:34:59.468665    1537 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:34:59.468819    1537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:59.468823    1537 out.go:358] Setting ErrFile to fd 2...
	I1216 11:34:59.468830    1537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:59.468980    1537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:34:59.470151    1537 out.go:352] Setting JSON to true
	I1216 11:34:59.487792    1537 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":270,"bootTime":1734377429,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:34:59.487875    1537 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:34:59.493019    1537 out.go:97] [download-only-800000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:34:59.493127    1537 notify.go:220] Checking for updates...
	I1216 11:34:59.496939    1537 out.go:169] MINIKUBE_LOCATION=20091
	I1216 11:34:59.499965    1537 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:34:59.502978    1537 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:34:59.506941    1537 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:34:59.510983    1537 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	W1216 11:34:59.516931    1537 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 11:34:59.517089    1537 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:34:59.519927    1537 out.go:97] Using the qemu2 driver based on user configuration
	I1216 11:34:59.519934    1537 start.go:297] selected driver: qemu2
	I1216 11:34:59.519938    1537 start.go:901] validating driver "qemu2" against <nil>
	I1216 11:34:59.519974    1537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:34:59.522830    1537 out.go:169] Automatically selected the socket_vmnet network
	I1216 11:34:59.528289    1537 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1216 11:34:59.528393    1537 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 11:34:59.528411    1537 cni.go:84] Creating CNI manager for ""
	I1216 11:34:59.528435    1537 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 11:34:59.528444    1537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 11:34:59.528488    1537 start.go:340] cluster config:
	{Name:download-only-800000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-800000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:34:59.532942    1537 iso.go:125] acquiring lock: {Name:mkec1f98c7472c31399991ac5f2663618fb5f5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:34:59.536005    1537 out.go:97] Starting "download-only-800000" primary control-plane node in "download-only-800000" cluster
	I1216 11:34:59.536018    1537 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 11:34:59.594558    1537 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 11:34:59.594584    1537 cache.go:56] Caching tarball of preloaded images
	I1216 11:34:59.594771    1537 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 11:34:59.597934    1537 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I1216 11:34:59.597941    1537 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 11:34:59.675115    1537 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4?checksum=md5:ff0c92f745fa493248e668330d02c326 -> /Users/jenkins/minikube-integration/20091-990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-800000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-800000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-800000
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.36s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 11:35:08.540293    1494 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-687000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-687000
--- PASS: TestBinaryMirror (0.36s)
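
The --binary-mirror flag above points minikube at a local HTTP server instead of dl.k8s.io. A throwaway mirror in that spirit, assuming a ./mirror directory of cached binaries (the port and directory are illustrative, not what the test harness actually serves):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of cached binaries over HTTP; minikube can then be
	// started with --binary-mirror http://127.0.0.1:49310 as in the log.
	log.Fatal(http.ListenAndServe("127.0.0.1:49310",
		http.FileServer(http.Dir("./mirror"))))
}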

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-066000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-066000: exit status 85 (61.395917ms)

                                                
                                                
-- stdout --
	* Profile "addons-066000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-066000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-066000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-066000: exit status 85 (64.096708ms)

                                                
                                                
-- stdout --
	* Profile "addons-066000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-066000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (199.31s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-066000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-066000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m19.31167575s)
--- PASS: TestAddons/Setup (199.31s)

                                                
                                    
TestAddons/serial/Volcano (39.14s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 7.622459ms
addons_test.go:815: volcano-admission stabilized in 7.664459ms
addons_test.go:823: volcano-controller stabilized in 7.677917ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-tvhcr" [85c478d7-0f60-412f-96d7-3945ede103eb] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004387292s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-bbnsb" [5a5665d3-4dc2-4b15-bcda-2ad6f8770121] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006373792s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-jwj4n" [38a87551-28e6-4219-aa67-8a6127ce33f2] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00549525s
addons_test.go:842: (dbg) Run:  kubectl --context addons-066000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-066000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-066000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [807bee48-c7fa-4b63-a23e-68cba90441d3] Pending
helpers_test.go:344: "test-job-nginx-0" [807bee48-c7fa-4b63-a23e-68cba90441d3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [807bee48-c7fa-4b63-a23e-68cba90441d3] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.006475875s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable volcano --alsologtostderr -v=1: (10.852446417s)
--- PASS: TestAddons/serial/Volcano (39.14s)
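
The Volcano checks above all follow one pattern: poll for pods matching a label selector in a namespace until one is Running or the timeout expires. A sketch of that loop using client-go, under the assumption that helpers_test.go does something equivalent (this is not its actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until a pod matching selector reaches phase Running,
// roughly the shape of the "waiting 6m0s for pods matching ..." steps above.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute))
}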

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-066000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-066000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-066000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-066000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f39cc96-850c-4d57-b269-adfd99793e98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f39cc96-850c-4d57-b269-adfd99793e98] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.010103875s
addons_test.go:633: (dbg) Run:  kubectl --context addons-066000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-066000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-066000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-066000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.41s)

                                                
                                    
TestAddons/parallel/Registry (14.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 59.985167ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-hb4gf" [64948340-a6e9-46cc-b9f8-2da83ec199ca] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003898459s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kqcgz" [e14b22ab-4734-4aa6-a618-0c9f943a5a5c] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.019769042s
addons_test.go:331: (dbg) Run:  kubectl --context addons-066000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-066000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-066000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.09226975s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 ip
2024/12/16 11:39:40 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.63s)
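The registry check's key step is the one-shot busybox pod that wget --spider's the in-cluster Service DNS name: a zero exit proves both DNS resolution and an HTTP answer from the registry. A rough equivalent as a standalone Go program (profile name from this log; it uses -i rather than the log's -it, since there is no TTY here):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// One-shot busybox pod that HEAD-requests the in-cluster registry Service;
	// a zero exit means kube-dns resolved the name and the registry answered.
	cmd := exec.Command("kubectl", "--context", "addons-066000",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("registry unreachable: %v\n%s", err, out)
	}
	log.Println("registry service reachable from inside the cluster")
}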

                                                
                                    
TestAddons/parallel/Ingress (19.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-066000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-066000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-066000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [82c4974b-9807-493b-b031-60ea21a30642] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [82c4974b-9807-493b-b031-60ea21a30642] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.009911083s
I1216 11:40:53.437518    1494 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-066000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable ingress --alsologtostderr -v=1: (7.285527125s)
--- PASS: TestAddons/parallel/Ingress (19.76s)
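The ingress verification curls through the controller with a Host header matching the ingress rule. The log does this via minikube ssh against 127.0.0.1; the sketch below takes the simpler route of hitting the node IP from outside, which only works when the controller is reachable there, so treat it as an illustration of Host-based routing rather than a drop-in for the test:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Node IP taken from this log; the test reads it from `minikube ip`.
	req, err := http.NewRequest("GET", "http://192.168.105.2/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// The ingress routes on the Host header, not the URL, so we override it
	// to match the rule in testdata/nginx-ingress-v1.yaml.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
}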

                                                
                                    
TestAddons/parallel/InspektorGadget (10.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ndtq8" [99f29c65-c83a-4c16-bfe4-df5ed28318ca] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012182583s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.2837495s)
--- PASS: TestAddons/parallel/InspektorGadget (10.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.835041ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-zbkgd" [043054f8-f28d-47d5-a680-3302cd534ea5] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007998292s
addons_test.go:402: (dbg) Run:  kubectl --context addons-066000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.29s)

                                                
                                    
TestAddons/parallel/CSI (35.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1216 11:40:03.520311    1494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 11:40:03.522895    1494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 11:40:03.522904    1494 kapi.go:107] duration metric: took 2.62975ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.63325ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-066000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-066000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2640776a-9b90-4baa-a114-08e3b346e35f] Pending
helpers_test.go:344: "task-pv-pod" [2640776a-9b90-4baa-a114-08e3b346e35f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2640776a-9b90-4baa-a114-08e3b346e35f] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.010185542s
addons_test.go:511: (dbg) Run:  kubectl --context addons-066000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-066000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-066000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-066000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-066000 delete pod task-pv-pod: (1.108788084s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-066000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-066000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-066000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fc719e2f-2b99-49fc-b3c1-0aff8c991ae6] Pending
helpers_test.go:344: "task-pv-pod-restore" [fc719e2f-2b99-49fc-b3c1-0aff8c991ae6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fc719e2f-2b99-49fc-b3c1-0aff8c991ae6] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.007925791s
addons_test.go:553: (dbg) Run:  kubectl --context addons-066000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-066000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-066000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.146840208s)
--- PASS: TestAddons/parallel/CSI (35.58s)
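Each of the helpers_test.go:394 lines above is one iteration of a poll on the PVC's .status.phase. A minimal sketch of that wait loop, assuming kubectl on PATH (the 2-second sleep is an assumption; the real helper's backoff may differ):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase asks the API server for just the PVC's .status.phase string.
func pvcPhase(ctx, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
		"-n", "default", "-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // matches the test's 6m0s budget
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-066000", "hpvc")
		if err == nil && phase == "Bound" {
			fmt.Println("PVC bound")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	log.Fatal("timed out waiting for PVC to bind")
}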

                                                
                                    
TestAddons/parallel/Headlamp (17.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-066000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-dwr2j" [1dbe2fb0-5216-43f4-9fc8-e39df8c61da6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-dwr2j" [1dbe2fb0-5216-43f4-9fc8-e39df8c61da6] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005107208s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable headlamp --alsologtostderr -v=1: (5.25169725s)
--- PASS: TestAddons/parallel/Headlamp (17.62s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.22s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5498fbc9c4-h9jl4" [f9f67cf3-5786-4ded-9199-830733a42d3b] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003708625s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.22s)

                                                
                                    
TestAddons/parallel/LocalPath (52.19s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-066000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-066000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [73f0d964-c03b-4cbe-9f2c-355db99228a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [73f0d964-c03b-4cbe-9f2c-355db99228a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [73f0d964-c03b-4cbe-9f2c-355db99228a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.0102875s
addons_test.go:906: (dbg) Run:  kubectl --context addons-066000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 ssh "cat /opt/local-path-provisioner/pvc-00a1f57a-491f-4236-98fc-7b3776929fd8_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-066000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-066000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.6121655s)
--- PASS: TestAddons/parallel/LocalPath (52.19s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.2s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gxjqg" [5c31b4ee-4fc0-4230-b4c7-f20d6ab1c0df] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00853875s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.20s)

                                                
                                    
TestAddons/parallel/Yakd (10.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-r7pgc" [72dbec1b-67e0-42d1-ac32-203e27745ccc] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007232458s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-066000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-066000 addons disable yakd --alsologtostderr -v=1: (5.275712583s)
--- PASS: TestAddons/parallel/Yakd (10.28s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-066000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-066000: (12.239792417s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-066000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-066000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-066000
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (11.49s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1216 12:29:41.131437    1494 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 12:29:41.131654    1494 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1216 12:29:43.151337    1494 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1216 12:29:43.151589    1494 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1216 12:29:43.151639    1494 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit
I1216 12:29:43.698074    1494 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10932d900 0x10932d900 0x10932d900 0x10932d900 0x10932d900 0x10932d900 0x10932d900] Decompressors:map[bz2:0x140007b3450 gz:0x140007b3458 tar:0x140007b3400 tar.bz2:0x140007b3410 tar.gz:0x140007b3420 tar.xz:0x140007b3430 tar.zst:0x140007b3440 tbz2:0x140007b3410 tgz:0x140007b3420 txz:0x140007b3430 tzst:0x140007b3440 xz:0x140007b3460 zip:0x140007b3470 zst:0x140007b3468] Getters:map[file:0x1400048a740 http:0x140007c1ef0 https:0x14000492000] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 12:29:43.698224    1494 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3988687064/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.49s)
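The interesting behavior in this log is the download fallback: the arm64-suffixed asset's checksum file 404s, so the code retries the common (unsuffixed) name. A simplified sketch of that resolution order, probing the .sha256 sidecar with HEAD requests (the real code uses go-getter and verifies the checksum, which this sketch skips):

package main

import (
	"fmt"
	"net/http"
)

// driverURL builds the release URL for a driver binary, optionally arch-suffixed.
func driverURL(version, driver, arch string) string {
	name := driver
	if arch != "" {
		name += "-" + arch
	}
	return fmt.Sprintf("https://github.com/kubernetes/minikube/releases/download/%s/%s", version, name)
}

// resolve tries the arch-specific asset first and falls back to the common
// name when the arch-specific one (or its .sha256 checksum) is missing.
func resolve(version, driver, arch string) (string, error) {
	for _, a := range []string{arch, ""} {
		url := driverURL(version, driver, a)
		resp, err := http.Head(url + ".sha256")
		if err != nil {
			return "", err
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return url, nil
		}
	}
	return "", fmt.Errorf("no release asset for %s %s", driver, version)
}

func main() {
	url, err := resolve("v1.3.0", "docker-machine-driver-hyperkit", "arm64")
	fmt.Println(url, err)
}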

                                                
                                    
TestErrorSpam/setup (36.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-573000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-573000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 --driver=qemu2 : (36.26286775s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.30.2, which may have incompatibilities with Kubernetes 1.32.0."
--- PASS: TestErrorSpam/setup (36.26s)

                                                
                                    
TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 status
--- PASS: TestErrorSpam/status (0.26s)

                                                
                                    
TestErrorSpam/pause (0.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 pause
--- PASS: TestErrorSpam/pause (0.66s)

                                                
                                    
TestErrorSpam/unpause (0.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

                                                
                                    
TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 stop: (12.213952916s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 stop: (26.039550333s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-573000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-573000 stop: (26.041217s)
--- PASS: TestErrorSpam/stop (64.30s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20091-990/.minikube/files/etc/test/nested/copy/1494/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.71s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-278000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1216 11:43:28.279571    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.287171    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.300525    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.323927    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.367350    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.450776    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.614134    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:28.937660    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:29.581145    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:30.864607    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:33.426767    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:43:38.550212    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-278000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.706630625s)
--- PASS: TestFunctional/serial/StartWithProxy (49.71s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.99s)

=== RUN   TestFunctional/serial/SoftStart
I1216 11:43:47.205678    1494 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-278000 --alsologtostderr -v=8
E1216 11:43:48.794039    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:44:09.277681    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-278000 --alsologtostderr -v=8: (37.986035208s)
functional_test.go:663: soft start took 37.986465458s for "functional-278000" cluster.
I1216 11:44:25.191587    1494 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (37.99s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-278000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-278000 cache add registry.k8s.io/pause:3.1: (1.228346042s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-278000 cache add registry.k8s.io/pause:3.3: (1.084605625s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3512374627/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cache add minikube-local-cache-test:functional-278000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cache delete minikube-local-cache-test:functional-278000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-278000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.577416ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.70s)
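cache_reload exercises a remove / verify-missing / reload / verify-present cycle. Sketched as a standalone program driving the same commands the test runs (binary path and profile name taken from this log):

package main

import (
	"log"
	"os/exec"
)

// run executes a minikube subcommand against the test profile and reports
// whether it exited zero.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-darwin-arm64", append([]string{"-p", "functional-278000"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("%v: %s", err, out)
	}
	return err
}

func main() {
	// 1. Remove the image from the node's runtime.
	run("ssh", "sudo", "docker", "rmi", "registry.k8s.io/pause:latest")
	// 2. Confirm it is gone: inspecti must now fail.
	if err := run("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image unexpectedly still present")
	}
	// 3. Re-push everything in the host-side cache back into the node.
	if err := run("cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// 4. The image must be back.
	if err := run("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
}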

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 kubectl -- --context functional-278000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.78s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.27s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-278000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-278000 get pods: (1.271012s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.27s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-278000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 11:44:50.240532    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-278000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.171892875s)
functional_test.go:761: restart took 38.171982334s for "functional-278000" cluster.
I1216 11:45:10.781774    1494 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (38.17s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-278000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
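ComponentHealth reads the control-plane pods as JSON and reports the phase plus the Ready condition for each component, which is where the phase:/status: pairs above come from. A minimal sketch that decodes only the fields needed (the struct shape follows the standard Pod schema, not the test's own types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models just the fields the health check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-278000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// Static control-plane pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s\n", p.Metadata.Labels["component"], p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Labels["component"], c.Status)
			}
		}
	}
}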

                                                
                                    
TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2899856381/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

                                                
                                    
TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-278000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-278000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-278000: exit status 115 (146.803125ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30106 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-278000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-278000 delete -f testdata/invalidsvc.yaml: (1.110369708s)
--- PASS: TestFunctional/serial/InvalidService (4.36s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 config get cpus: exit status 14 (33.909875ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 config get cpus: exit status 14 (35.619417ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
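The two Non-zero exit entries show the contract under test: config get on an unset key exits 14 instead of printing an empty value, so callers can distinguish "unset" from "set to empty". Checking that from Go looks roughly like this (binary path and profile taken from this log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `config get` on an unset key signals the miss through its exit code.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-278000", "config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 14 when the key is unset
	}
}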

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-278000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-278000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2388: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.89s)

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-278000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-278000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.672084ms)

-- stdout --
	* [functional-278000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1216 11:46:03.397845    2375 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:46:03.398018    2375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:03.398021    2375 out.go:358] Setting ErrFile to fd 2...
	I1216 11:46:03.398023    2375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:03.398146    2375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:46:03.399294    2375 out.go:352] Setting JSON to false
	I1216 11:46:03.417270    2375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":934,"bootTime":1734377429,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:46:03.417377    2375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:46:03.422211    2375 out.go:177] * [functional-278000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1216 11:46:03.429261    2375 notify.go:220] Checking for updates...
	I1216 11:46:03.433154    2375 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 11:46:03.437244    2375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:46:03.441051    2375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:46:03.444210    2375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:46:03.447233    2375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 11:46:03.450254    2375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:46:03.453606    2375 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:46:03.453850    2375 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:46:03.458249    2375 out.go:177] * Using the qemu2 driver based on existing profile
	I1216 11:46:03.465224    2375 start.go:297] selected driver: qemu2
	I1216 11:46:03.465230    2375 start.go:901] validating driver "qemu2" against &{Name:functional-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:46:03.465284    2375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:46:03.471247    2375 out.go:201] 
	W1216 11:46:03.475200    2375 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 11:46:03.479257    2375 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-278000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-278000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-278000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.723166ms)

-- stdout --
	* [functional-278000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1216 11:46:03.272672    2371 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:46:03.272824    2371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:03.272827    2371 out.go:358] Setting ErrFile to fd 2...
	I1216 11:46:03.272829    2371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:46:03.272961    2371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
	I1216 11:46:03.274522    2371 out.go:352] Setting JSON to false
	I1216 11:46:03.294462    2371 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":934,"bootTime":1734377429,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1216 11:46:03.294555    2371 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1216 11:46:03.299100    2371 out.go:177] * [functional-278000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1216 11:46:03.306303    2371 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 11:46:03.306391    2371 notify.go:220] Checking for updates...
	I1216 11:46:03.314247    2371 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	I1216 11:46:03.318194    2371 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1216 11:46:03.321311    2371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:46:03.324222    2371 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	I1216 11:46:03.327250    2371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:46:03.330572    2371 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 11:46:03.330822    2371 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:46:03.335219    2371 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1216 11:46:03.342200    2371 start.go:297] selected driver: qemu2
	I1216 11:46:03.342206    2371 start.go:901] validating driver "qemu2" against &{Name:functional-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:46:03.342251    2371 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:46:03.348123    2371 out.go:201] 
	W1216 11:46:03.352229    2371 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 11:46:03.356213    2371 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
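The -f argument above is a Go text/template rendered over minikube's status object, which is how the test checks each field individually. A minimal sketch of the same rendering, assuming a hypothetical Status struct with just the four fields the format string references (not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status carries just the four fields the test's format string reads.
// This struct shape is an assumption for illustration, not minikube's
// actual status type.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// The exact format string the test passes via `status -f`, including
	// its "kublet" display label (copied verbatim from the log).
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}

Note that "kublet" is only a display label in the format string, not a field name, so the render still resolves {{.Kubelet}} correctly.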

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (24.33s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ea1f1506-9706-4d05-8f98-b4b17dd012c6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010277666s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-278000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-278000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-278000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-278000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3dbdcf14-2dff-4da8-892e-906bff9cf12c] Pending
helpers_test.go:344: "sp-pod" [3dbdcf14-2dff-4da8-892e-906bff9cf12c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3dbdcf14-2dff-4da8-892e-906bff9cf12c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004802666s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-278000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-278000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-278000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f8d0a172-de03-4e0e-a8dc-bf1f7716b0f0] Pending
helpers_test.go:344: "sp-pod" [f8d0a172-de03-4e0e-a8dc-bf1f7716b0f0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f8d0a172-de03-4e0e-a8dc-bf1f7716b0f0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006346s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-278000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.33s)
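Each "waiting ... for pods matching" phase above boils down to polling pod status by label until everything reports Running. A minimal sketch of such a wait loop, shelling out to kubectl the way the logged invocations do; waitForPhase and its signature are illustrative, not the harness's real helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPhase polls `kubectl get pods` until every pod matching the
// label selector reports the wanted phase or the timeout elapses.
func waitForPhase(kubeContext, selector, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			ok := len(phases) > 0
			for _, p := range phases {
				ok = ok && p == want
			}
			if ok {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // matches the log's multi-second Pending-to-Running cadence
	}
	return fmt.Errorf("pods matching %q not %s within %v", selector, want, timeout)
}

func main() {
	err := waitForPhase("functional-278000", "test=storage-provisioner", "Running", 3*time.Minute)
	fmt.Println(err)
}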

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh -n functional-278000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cp functional-278000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3467955671/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh -n functional-278000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh -n functional-278000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1494/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /etc/test/nested/copy/1494/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1494.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /etc/ssl/certs/1494.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1494.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /usr/share/ca-certificates/1494.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /etc/ssl/certs/14942.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /usr/share/ca-certificates/14942.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-278000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh "sudo systemctl is-active crio": exit status 1 (119.642958ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
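The non-zero exit here is expected: systemctl is-active encodes unit state in its exit code (0 for active, 3 for inactive) while still printing the state on stdout, and the test accepts the "inactive" answer for a runtime that should be disabled. A minimal sketch of reading both channels, assuming a host with systemctl on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "crio")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("active: %s", out)
	case errors.As(err, &ee):
		// The state string ("inactive") arrives on stdout; the exit
		// code carries the same answer numerically.
		fmt.Printf("state %q, exit code %d\n", out, ee.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}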

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-278000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-278000
docker.io/kicbase/echo-server:functional-278000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-278000 image ls --format short --alsologtostderr:
I1216 11:46:07.597134    2410 out.go:345] Setting OutFile to fd 1 ...
I1216 11:46:07.597579    2410 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:07.597583    2410 out.go:358] Setting ErrFile to fd 2...
I1216 11:46:07.597585    2410 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:07.597817    2410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 11:46:07.598226    2410 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:07.598289    2410 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:07.599237    2410 ssh_runner.go:195] Run: systemctl --version
I1216 11:46:07.599247    2410 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/functional-278000/id_rsa Username:docker}
I1216 11:46:07.626418    2410 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-278000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.16-0          | 7fc9d4aa817aa | 142MB  |
| docker.io/kicbase/echo-server               | functional-278000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| localhost/my-image                          | functional-278000 | 223422e44b927 | 1.41MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.32.0           | 2b5bd0f16085a | 93.9MB |
| registry.k8s.io/kube-controller-manager     | v1.32.0           | a8d049396f6b8 | 87.2MB |
| registry.k8s.io/kube-proxy                  | v1.32.0           | 2f50386e20bfd | 97.1MB |
| docker.io/library/minikube-local-cache-test | functional-278000 | a11cb486a38ad | 30B    |
| docker.io/library/nginx                     | alpine            | dba92e6b64886 | 56.9MB |
| docker.io/library/nginx                     | latest            | bdf62fd3a32f1 | 197MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.32.0           | c3ff26fb59f37 | 67.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-278000 image ls --format table --alsologtostderr:
I1216 11:46:09.776624    2423 out.go:345] Setting OutFile to fd 1 ...
I1216 11:46:09.776822    2423 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:09.776826    2423 out.go:358] Setting ErrFile to fd 2...
I1216 11:46:09.776828    2423 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:09.776992    2423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 11:46:09.777442    2423 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:09.777508    2423 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:09.778421    2423 ssh_runner.go:195] Run: systemctl --version
I1216 11:46:09.778433    2423 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/functional-278000/id_rsa Username:docker}
I1216 11:46:09.800157    2423 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E1216 11:46:12.163272    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
2024/12/16 11:46:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-278000 image ls --format json --alsologtostderr:
[{"id":"2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"97100000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"87200000"},{"id":"c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"67900000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"223422e44b9273ba9ab9639310673c51ab33afa5eb8fb5db3e892408d7164003","repoDigests":[],"repoTags":["localhost/my-image:functional-278000"],"size":"1410000"},{"id":"2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"93900000"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"56900000"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"142000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-278000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"a11cb486a38ad177f7d8115b5457752f694349e242844d97a38cd1b22ecbd000","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-278000"],"size":"30"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-278000 image ls --format json --alsologtostderr:
I1216 11:46:09.705543    2421 out.go:345] Setting OutFile to fd 1 ...
I1216 11:46:09.705728    2421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:09.705736    2421 out.go:358] Setting ErrFile to fd 2...
I1216 11:46:09.705738    2421 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:09.705864    2421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 11:46:09.706335    2421 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:09.706401    2421 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:09.707333    2421 ssh_runner.go:195] Run: systemctl --version
I1216 11:46:09.707345    2421 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/functional-278000/id_rsa Username:docker}
I1216 11:46:09.728774    2421 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
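The JSON listing above is an array of objects with id, repoDigests, repoTags, and size keys. A minimal sketch of decoding it, with the struct shape inferred from this run's output (the binary path and profile name simply match this workspace):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image mirrors the object shape visible in the JSON listing above;
// the field names come straight from that output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Drive the same binary and profile as the test run.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-278000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %s bytes\n", strings.Join(img.RepoTags, ", "), img.Size)
	}
}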

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-278000 image ls --format yaml --alsologtostderr:
- id: c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "67900000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "142000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "97100000"
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "56900000"
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-278000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: a11cb486a38ad177f7d8115b5457752f694349e242844d97a38cd1b22ecbd000
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-278000
size: "30"
- id: 2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "93900000"
- id: a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "87200000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-278000 image ls --format yaml --alsologtostderr:
I1216 11:46:07.676875    2412 out.go:345] Setting OutFile to fd 1 ...
I1216 11:46:07.677095    2412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:07.677098    2412 out.go:358] Setting ErrFile to fd 2...
I1216 11:46:07.677101    2412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:07.677226    2412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 11:46:07.677658    2412 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:07.677732    2412 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:07.678541    2412 ssh_runner.go:195] Run: systemctl --version
I1216 11:46:07.678549    2412 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/functional-278000/id_rsa Username:docker}
I1216 11:46:07.701053    2412 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh pgrep buildkitd: exit status 1 (63.037125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image build -t localhost/my-image:functional-278000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-278000 image build -t localhost/my-image:functional-278000 testdata/build --alsologtostderr: (1.811573583s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-278000 image build -t localhost/my-image:functional-278000 testdata/build --alsologtostderr:
I1216 11:46:07.819418    2416 out.go:345] Setting OutFile to fd 1 ...
I1216 11:46:07.819702    2416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:07.819710    2416 out.go:358] Setting ErrFile to fd 2...
I1216 11:46:07.819712    2416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:46:07.819853    2416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20091-990/.minikube/bin
I1216 11:46:07.820305    2416 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:07.821204    2416 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 11:46:07.822106    2416 ssh_runner.go:195] Run: systemctl --version
I1216 11:46:07.822118    2416 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20091-990/.minikube/machines/functional-278000/id_rsa Username:docker}
I1216 11:46:07.844177    2416 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1525968294.tar
I1216 11:46:07.844250    2416 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 11:46:07.848122    2416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1525968294.tar
I1216 11:46:07.849871    2416 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1525968294.tar: stat -c "%s %y" /var/lib/minikube/build/build.1525968294.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1525968294.tar': No such file or directory
I1216 11:46:07.849886    2416 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1525968294.tar --> /var/lib/minikube/build/build.1525968294.tar (3072 bytes)
I1216 11:46:07.860117    2416 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1525968294
I1216 11:46:07.868023    2416 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1525968294 -xf /var/lib/minikube/build/build.1525968294.tar
I1216 11:46:07.871421    2416 docker.go:360] Building image: /var/lib/minikube/build/build.1525968294
I1216 11:46:07.871483    2416 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-278000 /var/lib/minikube/build/build.1525968294
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:223422e44b9273ba9ab9639310673c51ab33afa5eb8fb5db3e892408d7164003 done
#8 naming to localhost/my-image:functional-278000 done
#8 DONE 0.0s
I1216 11:46:09.563901    2416 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-278000 /var/lib/minikube/build/build.1525968294: (1.692421209s)
I1216 11:46:09.563978    2416 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1525968294
I1216 11:46:09.567778    2416 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1525968294.tar
I1216 11:46:09.571026    2416 build_images.go:217] Built localhost/my-image:functional-278000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1525968294.tar
I1216 11:46:09.571041    2416 build_images.go:133] succeeded building to: functional-278000
I1216 11:46:09.571044    2416 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)
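As the build_images.go lines show, image build first packs the build context into a tarball, copies it into the VM, and only then runs docker build there. A minimal sketch of the packing step using archive/tar; tarDir is illustrative, not minikube's implementation:

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir packs the regular files under dir into a tar stream, mirroring
// the "build context as a tarball" step above. No symlink or special
// permission handling in this sketch.
func tarDir(dir string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("build.tar")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := tarDir("testdata/build", out); err != nil {
		panic(err)
	}
}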

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.780576792s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-278000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/DockerEnv/bash (0.34s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-278000 docker-env) && out/minikube-darwin-arm64 status -p functional-278000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-278000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.34s)
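The bash eval above only exports the variables that docker-env prints before invoking docker. A minimal sketch of the same wiring without a shell, assuming the bash-style export KEY="VALUE" lines that docker-env emits:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Collect the `export KEY="VALUE"` lines docker-env prints.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-278000", "docker-env").Output()
	if err != nil {
		panic(err)
	}
	env := os.Environ()
	for _, line := range strings.Split(string(out), "\n") {
		if rest, ok := strings.CutPrefix(line, "export "); ok {
			// Drop the shell quoting to get plain KEY=VALUE pairs.
			env = append(env, strings.ReplaceAll(rest, `"`, ""))
		}
	}
	// Run docker against the VM's daemon, as the eval'd shell would.
	docker := exec.Command("docker", "images")
	docker.Env = env
	docker.Stdout = os.Stdout
	if err := docker.Run(); err != nil {
		panic(err)
	}
}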

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-278000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-278000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-278000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2166: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-278000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-278000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-278000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8343d98f-1bda-4a1d-88a7-3b01ac384faf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8343d98f-1bda-4a1d-88a7-3b01ac384faf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.008595958s
I1216 11:45:27.678005    1494 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image load --daemon kicbase/echo-server:functional-278000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.68s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image load --daemon kicbase/echo-server:functional-278000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-278000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image load --daemon kicbase/echo-server:functional-278000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image save kicbase/echo-server:functional-278000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image rm kicbase/echo-server:functional-278000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-278000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 image save --daemon kicbase/echo-server:functional-278000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-278000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.21s)
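Taken together, the ImageCommands subtests above walk the test image through a full round-trip between the host Docker daemon, the cluster runtime, and a tarball. A condensed sketch of the same sequence, using the exact commands from the log:

    $ out/minikube-darwin-arm64 -p functional-278000 image load --daemon kicbase/echo-server:functional-278000
    $ out/minikube-darwin-arm64 -p functional-278000 image save kicbase/echo-server:functional-278000 /Users/jenkins/workspace/echo-server-save.tar
    $ out/minikube-darwin-arm64 -p functional-278000 image rm kicbase/echo-server:functional-278000
    $ out/minikube-darwin-arm64 -p functional-278000 image load /Users/jenkins/workspace/echo-server-save.tar
    $ out/minikube-darwin-arm64 -p functional-278000 image save --daemon kicbase/echo-server:functional-278000
    $ out/minikube-darwin-arm64 -p functional-278000 image ls   # run after each step to verify the image list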
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-278000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.123.247 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1216 11:45:27.766465    1494 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1216 11:45:27.812650    1494 config.go:182] Loaded profile config "functional-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.32.0
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
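The three DNS checks above verify the tunnel's name resolution through two independent paths; the underlying commands, as run by the harness:

    $ dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A   # query the cluster DNS service directly
    $ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.          # query through the macOS resolver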
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-278000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-278000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-278000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-s44pb" [4b6e456e-74d1-47ba-bf25-663e0625ab5d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-s44pb" [4b6e456e-74d1-47ba-bf25-663e0625ab5d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009031917s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 service list -o json
functional_test.go:1494: Took "292.512958ms" to run "out/minikube-darwin-arm64 -p functional-278000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32220
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32220
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
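The ServiceCmd subtests above amount to the standard expose-and-resolve workflow; condensed from the commands in this run:

    $ kubectl --context functional-278000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    $ kubectl --context functional-278000 expose deployment hello-node --type=NodePort --port=8080
    $ out/minikube-darwin-arm64 -p functional-278000 service list
    $ out/minikube-darwin-arm64 -p functional-278000 service hello-node --url                        # -> http://192.168.105.4:32220
    $ out/minikube-darwin-arm64 -p functional-278000 service --namespace=default --https --url hello-node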
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "101.834833ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.63ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "100.643958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.603709ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)
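For scale, the timed variants above differ mainly in the --light flag, which skips validating each profile's status and accounts for the faster runs:

    $ out/minikube-darwin-arm64 profile list -o json           # ~100ms in this run
    $ out/minikube-darwin-arm64 profile list -o json --light   # ~40ms in this run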
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2288691701/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734378353412355000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2288691701/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734378353412355000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2288691701/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734378353412355000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2288691701/001/test-1734378353412355000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.109625ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1216 11:45:53.471979    1494 retry.go:31] will retry after 427.783446ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (71.685459ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1216 11:45:53.973686    1494 retry.go:31] will retry after 941.156956ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 19:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 19:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 19:45 test-1734378353412355000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh cat /mount-9p/test-1734378353412355000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-278000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [33400b94-a07a-4886-961e-f9b2b22f38c4] Pending
helpers_test.go:344: "busybox-mount" [33400b94-a07a-4886-961e-f9b2b22f38c4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [33400b94-a07a-4886-961e-f9b2b22f38c4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [33400b94-a07a-4886-961e-f9b2b22f38c4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006186792s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-278000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2288691701/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.17s)
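Note that the two non-zero findmnt exits above are the harness polling before the 9p mount is ready, not failures. The flow reduces to mounting a host directory, verifying it from inside the guest, then exercising it from a pod; a sketch using this run's commands, where <host-dir> stands in for the per-run temp directory:

    $ out/minikube-darwin-arm64 mount -p functional-278000 <host-dir>:/mount-9p --alsologtostderr -v=1 &
    $ out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-darwin-arm64 -p functional-278000 ssh -- ls -la /mount-9p
    $ kubectl --context functional-278000 replace --force -f testdata/busybox-mount-test.yaml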
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1772062129/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.623917ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1216 11:46:00.644244    1494 retry.go:31] will retry after 359.956338ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1772062129/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-278000 ssh "sudo umount -f /mount-9p": exit status 1 (63.148208ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-278000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1772062129/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.90s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T" /mount1: (1.332098292s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-278000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-278000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1600517340/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
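VerifyCleanup mounts the same host directory at three guest paths, then tears all of them down with a single kill; condensed from this run, with <host-dir> again standing in for the temp directory:

    $ out/minikube-darwin-arm64 mount -p functional-278000 <host-dir>:/mount1 --alsologtostderr -v=1 &
    $ out/minikube-darwin-arm64 mount -p functional-278000 <host-dir>:/mount2 --alsologtostderr -v=1 &
    $ out/minikube-darwin-arm64 mount -p functional-278000 <host-dir>:/mount3 --alsologtostderr -v=1 &
    $ out/minikube-darwin-arm64 -p functional-278000 ssh "findmnt -T" /mount1
    $ out/minikube-darwin-arm64 mount -p functional-278000 --kill=true   # terminates every mount process for the profile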
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-278000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-278000
--- PASS: TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-278000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-922000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-505000 --driver=qemu2 
E1216 12:16:31.412185    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/addons-066000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-505000 --driver=qemu2 : (34.17844675s)
--- PASS: TestImageBuild/serial/Setup (34.18s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-505000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-505000: (1.386121625s)
--- PASS: TestImageBuild/serial/NormalBuild (1.39s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-505000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.40s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-505000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.34s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-505000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)
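The four build variants above map to the following invocations, verbatim from the log; they differ only in build options and context directory:

    $ out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-505000
    $ out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-505000
    $ out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-505000
    $ out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-505000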
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-617000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-617000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.524375ms)
-- stdout --
	{"specversion":"1.0","id":"90e69ddc-0179-4b37-b7f6-876ede116ca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"815f5532-5be1-4042-b7ba-12a82a978d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20091"}}
	{"specversion":"1.0","id":"b421aa12-d870-4ccb-af4c-59ad0a1578bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig"}}
	{"specversion":"1.0","id":"1387f550-8e14-4744-80c0-5ec347dd163c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b91b471a-6db5-46b9-969e-1f64d40496bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14268803-b391-4572-9366-7c96949cb945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube"}}
	{"specversion":"1.0","id":"f834634a-f42e-4d8b-9199-e87d7eb6a546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f0a1d4cf-42a2-40d0-ab2e-e749e3631143","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-617000
--- PASS: TestErrorJSONOutput (0.21s)
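Each line of the JSON output above is a CloudEvents envelope; the machine-readable error is the event with type io.k8s.sigs.minikube.error. A sketch of extracting it, assuming jq is available (jq is not part of the test itself):

    $ out/minikube-darwin-arm64 start -p json-output-error-617000 --memory=2200 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64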
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-912000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-912000 --driver=qemu2 : (34.950567042s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-913000 --driver=qemu2 
E1216 12:25:18.628165    1494 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20091-990/.minikube/profiles/functional-278000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-913000 --driver=qemu2 : (39.989384541s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-912000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-913000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-913000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-913000
helpers_test.go:175: Cleaning up "first-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-912000
--- PASS: TestMinikubeProfile (75.64s)
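TestMinikubeProfile brings up two clusters side by side and verifies the active profile can be flipped between them; the switch and check are simply:

    $ out/minikube-darwin-arm64 profile first-912000   # make first-912000 the active profile
    $ out/minikube-darwin-arm64 profile list -ojson    # confirm which profile is now marked active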
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-861000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (107.73075ms)
-- stdout --
	* [NoKubernetes-861000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20091-990/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20091-990/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
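The exit-14 usage error above is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. As the stderr hint says, a version pinned in the global config is cleared with:

    $ minikube config unset kubernetes-version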
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-861000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-861000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.607958ms)
-- stdout --
	* The control-plane node NoKubernetes-861000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-861000"

--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.645990166s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.742982875s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.39s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-861000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-861000: (3.54151975s)
--- PASS: TestNoKubernetes/serial/Stop (3.54s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-861000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-861000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.578125ms)
-- stdout --
	* The control-plane node NoKubernetes-861000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-861000"

--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-349000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-221000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-221000 --alsologtostderr -v=3: (3.498162166s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.50s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (41.137958ms)
-- stdout --
	Stopped

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-221000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
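This subtest, like its no-preload, embed-certs, default-k8s-diff-port, and newest-cni counterparts below, confirms that addons can be enabled while the cluster is stopped; the pattern each time is a status probe (exit 7 / "Stopped" is the expected result) followed by the enable:

    $ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
    $ out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-221000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4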
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-456000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-456000 --alsologtostderr -v=3: (1.785675375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.79s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-456000 -n no-preload-456000: exit status 7 (61.762125ms)
-- stdout --
	Stopped

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-456000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-355000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-355000 --alsologtostderr -v=3: (1.868434708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.87s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-355000 -n embed-certs-355000: exit status 7 (62.35775ms)
-- stdout --
	Stopped

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-355000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-304000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-304000 --alsologtostderr -v=3: (1.882048458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-304000 -n default-k8s-diff-port-304000: exit status 7 (63.161875ms)
-- stdout --
	Stopped

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-304000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-225000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-225000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-225000 --alsologtostderr -v=3: (3.038842333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.04s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-225000 -n newest-cni-225000: exit status 7 (62.928875ms)
-- stdout --
	Stopped

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-225000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
Test skip (23/274)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
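For reference, the precondition this test checks for, the Docker driver combined with the containerd runtime, is not available on this darwin/arm64 QEMU job. A hypothetical invocation that would provide it (standard minikube flags; the profile name is invented for illustration):

    # Sketch only: Docker driver with containerd as the container runtime.
    out/minikube-darwin-arm64 start -p dockerenv-containerd \
        --driver=docker --container-runtime=containerd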

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-838000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/hosts:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/resolv.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-838000

>>> host: crictl pods:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crictl containers:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe netcat deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-838000" does not exist

>>> k8s: netcat logs:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-838000" does not exist

>>> k8s: coredns logs:
error: context "cilium-838000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-838000" does not exist

>>> k8s: api server logs:
error: context "cilium-838000" does not exist

>>> host: /etc/cni:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip a s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip r s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables-save:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables table nat:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-838000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-838000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: kubelet daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubelet logs:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker system info:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-dockerd version:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd config dump:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/crio:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

----------------------- debugLogs end: cilium-838000 [took: 2.463099292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-838000
--- SKIP: TestNetworkPlugins/group/cilium (2.58s)
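Every probe in the debug dump above fails the same way because the cilium-838000 profile was never created: the kubeconfig captured under ">>> k8s: kubectl config:" has null clusters, contexts and users. A minimal sketch of a pre-check that distinguishes "cluster absent" from "cluster broken" before collecting such logs, using standard kubectl and minikube commands (profile name taken from the log):

    # Empty output means kubectl has no context for this profile, so every
    # per-cluster probe above is guaranteed to fail the same way.
    kubectl config get-contexts -o name | grep -x cilium-838000
    # The profile is likewise absent from minikube's own profile list:
    out/minikube-darwin-arm64 profile list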

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-255000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-255000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)