Test Report: QEMU_macOS 19780

                    
d63f64bffc284d34b6c2581e44dece8bfcca0b7a:2024-10-09:36574

Failed tests (99/257)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 30.1
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.01
27 TestAddons/Setup 84.87
28 TestCertOptions 10.12
29 TestCertExpiration 197.05
30 TestDockerFlags 12.63
31 TestForceSystemdFlag 12.67
32 TestForceSystemdEnv 10.1
77 TestFunctional/parallel/ServiceCmdConnect 27.64
142 TestMultiControlPlane/serial/StartCluster 725.39
143 TestMultiControlPlane/serial/DeployApp 117.79
144 TestMultiControlPlane/serial/PingHostFromPods 0.1
145 TestMultiControlPlane/serial/AddWorkerNode 0.09
146 TestMultiControlPlane/serial/NodeLabels 0.06
147 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
149 TestMultiControlPlane/serial/StopSecondaryNode 0.12
150 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
151 TestMultiControlPlane/serial/RestartSecondaryNode 0.15
152 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
153 TestMultiControlPlane/serial/RestartClusterKeepsNodes 956.04
164 TestJSONOutput/start/Command 725.26
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.09
176 TestJSONOutput/unpause/Command 0.06
196 TestMountStart/serial/StartWithMountFirst 10.18
199 TestMultiNode/serial/FreshStart2Nodes 9.97
200 TestMultiNode/serial/DeployApp2Nodes 80.25
201 TestMultiNode/serial/PingHostFrom2Pods 0.1
202 TestMultiNode/serial/AddNode 0.08
203 TestMultiNode/serial/MultiNodeLabels 0.07
204 TestMultiNode/serial/ProfileList 0.08
205 TestMultiNode/serial/CopyFile 0.07
206 TestMultiNode/serial/StopNode 0.15
207 TestMultiNode/serial/StartAfterStop 51.03
208 TestMultiNode/serial/RestartKeepsNodes 8.76
209 TestMultiNode/serial/DeleteNode 0.11
210 TestMultiNode/serial/StopMultiNode 2.13
211 TestMultiNode/serial/RestartMultiNode 5.27
212 TestMultiNode/serial/ValidateNameConflict 20.11
216 TestPreload 10.08
218 TestScheduledStopUnix 10.16
219 TestSkaffold 12.91
222 TestRunningBinaryUpgrade 626.16
224 TestKubernetesUpgrade 17.28
238 TestStoppedBinaryUpgrade/Upgrade 600.8
248 TestPause/serial/Start 9.93
251 TestNoKubernetes/serial/StartWithK8s 11.63
252 TestNoKubernetes/serial/StartWithStopK8s 7.54
253 TestNoKubernetes/serial/Start 7.69
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.81
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.25
259 TestNoKubernetes/serial/StartNoArgs 5.37
261 TestNetworkPlugins/group/auto/Start 9.87
262 TestNetworkPlugins/group/kindnet/Start 9.92
263 TestNetworkPlugins/group/flannel/Start 9.9
264 TestNetworkPlugins/group/enable-default-cni/Start 10.09
265 TestNetworkPlugins/group/bridge/Start 10.06
266 TestNetworkPlugins/group/kubenet/Start 9.93
267 TestNetworkPlugins/group/custom-flannel/Start 9.96
268 TestNetworkPlugins/group/calico/Start 9.95
269 TestNetworkPlugins/group/false/Start 9.89
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.91
272 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
276 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
277 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
278 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
279 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
280 TestStartStop/group/old-k8s-version/serial/Pause 0.11
282 TestStartStop/group/no-preload/serial/FirstStart 10.06
283 TestStartStop/group/no-preload/serial/DeployApp 0.1
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
287 TestStartStop/group/no-preload/serial/SecondStart 5.26
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
291 TestStartStop/group/no-preload/serial/Pause 0.11
293 TestStartStop/group/embed-certs/serial/FirstStart 9.9
294 TestStartStop/group/embed-certs/serial/DeployApp 0.1
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
298 TestStartStop/group/embed-certs/serial/SecondStart 7
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
302 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
304 TestStartStop/group/embed-certs/serial/Pause 0.12
306 TestStartStop/group/newest-cni/serial/FirstStart 10.07
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
316 TestStartStop/group/newest-cni/serial/SecondStart 5.27
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (30.1s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-185000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-185000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (30.093684084s)

-- stdout --
	{"specversion":"1.0","id":"3a92a786-b870-4439-91b2-fe02053156c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-185000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2e5baf4-8ea8-4c37-a854-79b030e0c621","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"da904534-fd12-4c06-a2c8-3013fd25a309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig"}}
	{"specversion":"1.0","id":"796a1bbb-7233-4426-8998-9745da020b85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4a52890a-c2e0-4925-a9b8-4174df1b26f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eaf3caa2-84f1-41c3-92a6-3b89135cd20e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube"}}
	{"specversion":"1.0","id":"c5b98d93-b699-4daa-83c2-e5e63502b6f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"328436ae-a7be-4330-8aa7-e146bc72c073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd639835-10b1-453d-a087-d43e79970d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"40afca57-74e2-45b2-b05d-f514f84f5a57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae4b5d65-9b16-4e23-852c-02ea5e3b914f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-185000\" primary control-plane node in \"download-only-185000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e5ea405-a5de-44c8-ad1f-171e593c0905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d97232a9-5df9-4a72-b822-3b482061302c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0] Decompressors:map[bz2:0x140003f9830 gz:0x140003f9838 tar:0x140003f9790 tar.bz2:0x140003f97a0 tar.gz:0x140003f97e0 tar.xz:0x140003f97f0 tar.zst:0x140003f9820 tbz2:0x140003f97a0 tgz:0x140003f97e0 txz:0x140003f97f0 tzst:0x140003f9820 xz:0x140003f9840 zip:0x140003f9850 zst:0x140003f9848] Getters:map[file:0x14000a18680 http:0x140008ba0a0 https:0x140008ba190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"2ba83465-d607-48ea-a7ab-05a3704a4e57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1009 11:45:34.333131    1687 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:45:34.333300    1687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:45:34.333304    1687 out.go:358] Setting ErrFile to fd 2...
	I1009 11:45:34.333306    1687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:45:34.333445    1687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	W1009 11:45:34.333534    1687 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19780-1164/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19780-1164/.minikube/config/config.json: no such file or directory
	I1009 11:45:34.334957    1687 out.go:352] Setting JSON to true
	I1009 11:45:34.353626    1687 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":904,"bootTime":1728498630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:45:34.353698    1687 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:45:34.358820    1687 out.go:97] [download-only-185000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:45:34.358941    1687 notify.go:220] Checking for updates...
	W1009 11:45:34.358980    1687 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 11:45:34.361823    1687 out.go:169] MINIKUBE_LOCATION=19780
	I1009 11:45:34.369785    1687 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:45:34.375801    1687 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:45:34.378826    1687 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:45:34.379972    1687 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	W1009 11:45:34.386898    1687 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 11:45:34.387102    1687 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:45:34.390838    1687 out.go:97] Using the qemu2 driver based on user configuration
	I1009 11:45:34.390860    1687 start.go:297] selected driver: qemu2
	I1009 11:45:34.390890    1687 start.go:901] validating driver "qemu2" against <nil>
	I1009 11:45:34.390978    1687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 11:45:34.394867    1687 out.go:169] Automatically selected the socket_vmnet network
	I1009 11:45:34.399100    1687 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1009 11:45:34.399233    1687 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 11:45:34.399269    1687 cni.go:84] Creating CNI manager for ""
	I1009 11:45:34.399300    1687 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1009 11:45:34.399348    1687 start.go:340] cluster config:
	{Name:download-only-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:45:34.404041    1687 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 11:45:34.407796    1687 out.go:97] Downloading VM boot image ...
	I1009 11:45:34.407813    1687 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1009 11:45:48.744440    1687 out.go:97] Starting "download-only-185000" primary control-plane node in "download-only-185000" cluster
	I1009 11:45:48.744460    1687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 11:45:48.802717    1687 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 11:45:48.802736    1687 cache.go:56] Caching tarball of preloaded images
	I1009 11:45:48.802954    1687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 11:45:48.808167    1687 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1009 11:45:48.808174    1687 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:45:48.888437    1687 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 11:46:02.906116    1687 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:46:02.906283    1687 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:46:03.601492    1687 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1009 11:46:03.601688    1687 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/download-only-185000/config.json ...
	I1009 11:46:03.601704    1687 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/download-only-185000/config.json: {Name:mkd0352330e63ea9488a0405dc95f822cf67234d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:46:03.601979    1687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 11:46:03.602205    1687 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1009 11:46:04.346361    1687 out.go:193] 
	W1009 11:46:04.351405    1687 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0] Decompressors:map[bz2:0x140003f9830 gz:0x140003f9838 tar:0x140003f9790 tar.bz2:0x140003f97a0 tar.gz:0x140003f97e0 tar.xz:0x140003f97f0 tar.zst:0x140003f9820 tbz2:0x140003f97a0 tgz:0x140003f97e0 txz:0x140003f97f0 tzst:0x140003f9820 xz:0x140003f9840 zip:0x140003f9850 zst:0x140003f9848] Getters:map[file:0x14000a18680 http:0x140008ba0a0 https:0x140008ba190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1009 11:46:04.351432    1687 out_reason.go:110] 
	W1009 11:46:04.358327    1687 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 11:46:04.361360    1687 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-185000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (30.10s)
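
The exit status 40 above comes from go-getter's checksum step: dl.k8s.io has no kubectl.sha256 for v1.20.0 on darwin/arm64 (arm64 Mac release binaries only appeared in later Kubernetes versions), so caching a v1.20.0 kubectl on this arm64 host cannot succeed. A minimal Go sketch, not part of the test suite, that probes the same URL quoted in the log:

	// repro404.go (hypothetical helper): HEAD the checksum URL go-getter fetches.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// URL copied verbatim from the go-getter error above.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url) // redirects from dl.k8s.io are followed automatically
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		// A "404 Not Found" here matches "bad response code: 404" in the log.
		fmt.Println(url, "->", resp.Status)
	}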

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
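
This subtest only verifies that the previous download left a kubectl binary in the cache, so it fails as a direct consequence of the 404 above. A stand-alone sketch of the same stat check (path copied from the log; the real test derives it from MINIKUBE_HOME rather than hard-coding it):

	// statkubectl.go (hypothetical helper): mirror the cache-existence check.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// With the download above failing, this reports "no such file or directory",
			// matching the message from aaa_download_only_test.go:175.
			fmt.Printf("expected the file for binary exist at %q but got error %v\n", path, err)
			os.Exit(1)
		}
		fmt.Println("kubectl cached at", path)
	}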

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-935000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-935000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.838519792s)

-- stdout --
	* [offline-docker-935000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-935000" primary control-plane node in "offline-docker-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:40:34.109877    3844 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:40:34.110014    3844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:34.110018    3844 out.go:358] Setting ErrFile to fd 2...
	I1009 12:40:34.110021    3844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:34.110176    3844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:40:34.111417    3844 out.go:352] Setting JSON to false
	I1009 12:40:34.130367    3844 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4204,"bootTime":1728498630,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:40:34.130447    3844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:40:34.136005    3844 out.go:177] * [offline-docker-935000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:40:34.142865    3844 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:40:34.142907    3844 notify.go:220] Checking for updates...
	I1009 12:40:34.150878    3844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:40:34.153980    3844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:40:34.156822    3844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:40:34.159804    3844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:40:34.166855    3844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:40:34.171232    3844 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:40:34.171284    3844 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:40:34.174785    3844 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:40:34.181876    3844 start.go:297] selected driver: qemu2
	I1009 12:40:34.181892    3844 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:40:34.181903    3844 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:40:34.184168    3844 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:40:34.188807    3844 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:40:34.191920    3844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:40:34.191937    3844 cni.go:84] Creating CNI manager for ""
	I1009 12:40:34.191962    3844 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:40:34.191965    3844 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:40:34.192014    3844 start.go:340] cluster config:
	{Name:offline-docker-935000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:40:34.196729    3844 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:34.200874    3844 out.go:177] * Starting "offline-docker-935000" primary control-plane node in "offline-docker-935000" cluster
	I1009 12:40:34.212819    3844 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:40:34.212868    3844 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:40:34.212878    3844 cache.go:56] Caching tarball of preloaded images
	I1009 12:40:34.212976    3844 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:40:34.212982    3844 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:40:34.213045    3844 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/offline-docker-935000/config.json ...
	I1009 12:40:34.213056    3844 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/offline-docker-935000/config.json: {Name:mk56db15713882f64743bdeb632156bfdbda1160 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:40:34.213352    3844 start.go:360] acquireMachinesLock for offline-docker-935000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:34.213398    3844 start.go:364] duration metric: took 36.625µs to acquireMachinesLock for "offline-docker-935000"
	I1009 12:40:34.213407    3844 start.go:93] Provisioning new machine with config: &{Name:offline-docker-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:40:34.213431    3844 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:40:34.217861    3844 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:40:34.233282    3844 start.go:159] libmachine.API.Create for "offline-docker-935000" (driver="qemu2")
	I1009 12:40:34.233317    3844 client.go:168] LocalClient.Create starting
	I1009 12:40:34.233390    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:40:34.233432    3844 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:34.233444    3844 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:34.233488    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:40:34.233517    3844 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:34.233525    3844 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:34.233905    3844 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:40:34.384773    3844 main.go:141] libmachine: Creating SSH key...
	I1009 12:40:34.447747    3844 main.go:141] libmachine: Creating Disk image...
	I1009 12:40:34.447763    3844 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:40:34.447961    3844 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2
	I1009 12:40:34.458479    3844 main.go:141] libmachine: STDOUT: 
	I1009 12:40:34.458500    3844 main.go:141] libmachine: STDERR: 
	I1009 12:40:34.458569    3844 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2 +20000M
	I1009 12:40:34.468002    3844 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:40:34.468039    3844 main.go:141] libmachine: STDERR: 
	I1009 12:40:34.468060    3844 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2
	I1009 12:40:34.468067    3844 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:40:34.468078    3844 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:34.468118    3844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e1:e9:62:ee:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2
	I1009 12:40:34.469975    3844 main.go:141] libmachine: STDOUT: 
	I1009 12:40:34.469991    3844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:34.470012    3844 client.go:171] duration metric: took 236.696125ms to LocalClient.Create
	I1009 12:40:36.471982    3844 start.go:128] duration metric: took 2.258609542s to createHost
	I1009 12:40:36.471995    3844 start.go:83] releasing machines lock for "offline-docker-935000", held for 2.258657917s
	W1009 12:40:36.472010    3844 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:36.475132    3844 out.go:177] * Deleting "offline-docker-935000" in qemu2 ...
	W1009 12:40:36.486573    3844 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:36.486580    3844 start.go:729] Will try again in 5 seconds ...
	I1009 12:40:41.488608    3844 start.go:360] acquireMachinesLock for offline-docker-935000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:41.489211    3844 start.go:364] duration metric: took 471.708µs to acquireMachinesLock for "offline-docker-935000"
	I1009 12:40:41.489360    3844 start.go:93] Provisioning new machine with config: &{Name:offline-docker-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:40:41.489716    3844 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:40:41.503405    3844 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:40:41.551903    3844 start.go:159] libmachine.API.Create for "offline-docker-935000" (driver="qemu2")
	I1009 12:40:41.551964    3844 client.go:168] LocalClient.Create starting
	I1009 12:40:41.552152    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:40:41.552246    3844 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:41.552262    3844 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:41.552331    3844 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:40:41.552392    3844 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:41.552408    3844 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:41.552970    3844 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:40:41.719683    3844 main.go:141] libmachine: Creating SSH key...
	I1009 12:40:41.847781    3844 main.go:141] libmachine: Creating Disk image...
	I1009 12:40:41.847787    3844 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:40:41.847999    3844 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2
	I1009 12:40:41.858074    3844 main.go:141] libmachine: STDOUT: 
	I1009 12:40:41.858093    3844 main.go:141] libmachine: STDERR: 
	I1009 12:40:41.858144    3844 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2 +20000M
	I1009 12:40:41.866657    3844 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:40:41.866681    3844 main.go:141] libmachine: STDERR: 
	I1009 12:40:41.866694    3844 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2
	I1009 12:40:41.866702    3844 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:40:41.866709    3844 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:41.866740    3844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:28:48:08:ed:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/offline-docker-935000/disk.qcow2
	I1009 12:40:41.868503    3844 main.go:141] libmachine: STDOUT: 
	I1009 12:40:41.868516    3844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:41.868529    3844 client.go:171] duration metric: took 316.563709ms to LocalClient.Create
	I1009 12:40:43.870707    3844 start.go:128] duration metric: took 2.38099475s to createHost
	I1009 12:40:43.870793    3844 start.go:83] releasing machines lock for "offline-docker-935000", held for 2.38161575s
	W1009 12:40:43.871240    3844 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:43.882939    3844 out.go:201] 
	W1009 12:40:43.885871    3844 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:40:43.885894    3844 out.go:270] * 
	* 
	W1009 12:40:43.888804    3844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:40:43.899895    3844 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-935000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-09 12:40:43.916264 -0700 PDT m=+3309.780246043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-935000 -n offline-docker-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-935000 -n offline-docker-935000: exit status 7 (71.775666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-935000
--- FAIL: TestOffline (10.01s)
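
The failure pattern here recurs across most qemu2 tests in this report: the driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon listening on /var/run/socket_vmnet, and both creation attempts die with "Connection refused" before the VM ever boots. A minimal Go sketch, not part of minikube, that checks the same unix socket:

	// checksock.go (hypothetical helper): dial the socket the qemu2 driver uses.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" (daemon down) or "no such file or directory"
			// (socket missing) would both explain the repeated StartHost failures above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

The same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' error is consistent with the many ~10 s Start failures in the summary table (TestCertOptions, TestDockerFlags, the TestNetworkPlugins and TestStartStop groups), so restoring the socket_vmnet daemon on this CI host is the likely single fix.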

TestAddons/Setup (84.87s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-953000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-953000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 90 (1m24.858135333s)

-- stdout --
	* [addons-953000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-953000" primary control-plane node in "addons-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1009 11:46:19.061653    1765 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:46:19.061810    1765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:46:19.061814    1765 out.go:358] Setting ErrFile to fd 2...
	I1009 11:46:19.061816    1765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:46:19.061938    1765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 11:46:19.063136    1765 out.go:352] Setting JSON to false
	I1009 11:46:19.080758    1765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":949,"bootTime":1728498630,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:46:19.080820    1765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:46:19.087590    1765 out.go:177] * [addons-953000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:46:19.094824    1765 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 11:46:19.094853    1765 notify.go:220] Checking for updates...
	I1009 11:46:19.101707    1765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:46:19.104697    1765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:46:19.107649    1765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:46:19.110709    1765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 11:46:19.113715    1765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 11:46:19.116911    1765 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:46:19.120723    1765 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 11:46:19.127713    1765 start.go:297] selected driver: qemu2
	I1009 11:46:19.127722    1765 start.go:901] validating driver "qemu2" against <nil>
	I1009 11:46:19.127730    1765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 11:46:19.130293    1765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 11:46:19.133662    1765 out.go:177] * Automatically selected the socket_vmnet network
	I1009 11:46:19.136831    1765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 11:46:19.136849    1765 cni.go:84] Creating CNI manager for ""
	I1009 11:46:19.136869    1765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 11:46:19.136874    1765 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 11:46:19.136899    1765 start.go:340] cluster config:
	{Name:addons-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:46:19.141558    1765 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 11:46:19.148696    1765 out.go:177] * Starting "addons-953000" primary control-plane node in "addons-953000" cluster
	I1009 11:46:19.152726    1765 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 11:46:19.152741    1765 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 11:46:19.152748    1765 cache.go:56] Caching tarball of preloaded images
	I1009 11:46:19.152826    1765 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 11:46:19.152831    1765 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 11:46:19.153026    1765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/addons-953000/config.json ...
	I1009 11:46:19.153037    1765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/addons-953000/config.json: {Name:mk2faf6376c8b3ecc65738f754eb7a8bb061043d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:46:19.153370    1765 start.go:360] acquireMachinesLock for addons-953000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 11:46:19.153463    1765 start.go:364] duration metric: took 86.791µs to acquireMachinesLock for "addons-953000"
	I1009 11:46:19.153473    1765 start.go:93] Provisioning new machine with config: &{Name:addons-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 11:46:19.153506    1765 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 11:46:19.160740    1765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1009 11:46:19.394851    1765 start.go:159] libmachine.API.Create for "addons-953000" (driver="qemu2")
	I1009 11:46:19.394884    1765 client.go:168] LocalClient.Create starting
	I1009 11:46:19.395080    1765 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 11:46:19.435331    1765 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 11:46:19.636553    1765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 11:46:20.401201    1765 main.go:141] libmachine: Creating SSH key...
	I1009 11:46:20.506700    1765 main.go:141] libmachine: Creating Disk image...
	I1009 11:46:20.506706    1765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 11:46:20.506972    1765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/disk.qcow2
	I1009 11:46:20.526932    1765 main.go:141] libmachine: STDOUT: 
	I1009 11:46:20.526980    1765 main.go:141] libmachine: STDERR: 
	I1009 11:46:20.527051    1765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/disk.qcow2 +20000M
	I1009 11:46:20.535772    1765 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 11:46:20.535788    1765 main.go:141] libmachine: STDERR: 
	I1009 11:46:20.535804    1765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/disk.qcow2
	I1009 11:46:20.535809    1765 main.go:141] libmachine: Starting QEMU VM...
	I1009 11:46:20.535849    1765 qemu.go:418] Using hvf for hardware acceleration
	I1009 11:46:20.535882    1765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:15:02:02:89:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/disk.qcow2
	I1009 11:46:20.595299    1765 main.go:141] libmachine: STDOUT: 
	I1009 11:46:20.595344    1765 main.go:141] libmachine: STDERR: 
	I1009 11:46:20.595348    1765 main.go:141] libmachine: Attempt 0
	I1009 11:46:20.595361    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:20.595455    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:20.595473    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:22.597649    1765 main.go:141] libmachine: Attempt 1
	I1009 11:46:22.597734    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:22.598076    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:22.598130    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:24.600370    1765 main.go:141] libmachine: Attempt 2
	I1009 11:46:24.600583    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:24.600932    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:24.600991    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:26.603147    1765 main.go:141] libmachine: Attempt 3
	I1009 11:46:26.603179    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:26.603263    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:26.603281    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:28.605303    1765 main.go:141] libmachine: Attempt 4
	I1009 11:46:28.605315    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:28.605356    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:28.605364    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:30.607397    1765 main.go:141] libmachine: Attempt 5
	I1009 11:46:30.607404    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:30.607445    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:30.607451    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:32.609481    1765 main.go:141] libmachine: Attempt 6
	I1009 11:46:32.609501    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:32.609594    1765 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1009 11:46:32.609604    1765 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:46:34.611635    1765 main.go:141] libmachine: Attempt 7
	I1009 11:46:34.611681    1765 main.go:141] libmachine: Searching for fa:15:2:2:89:77 in /var/db/dhcpd_leases ...
	I1009 11:46:34.611811    1765 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I1009 11:46:34.611824    1765 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:46:34.611828    1765 main.go:141] libmachine: Found match: fa:15:2:2:89:77
	I1009 11:46:34.611838    1765 main.go:141] libmachine: IP: 192.168.105.2
	I1009 11:46:34.611843    1765 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
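The retry loop above polls /var/db/dhcpd_leases every two seconds until an entry carrying the VM's MAC appears. Note that the searched string drops leading zeros from each octet (the VM was started with fa:15:02:02:89:77 but is looked up as fa:15:2:2:89:77), matching how the macOS lease file writes hw_address values. A minimal sketch of that octet normalization; normalizeMAC is an assumed helper name, not minikube's actual code:

	package main

	import (
		"fmt"
		"strings"
	)

	// normalizeMAC strips leading zeros from each octet so that
	// "fa:15:02:02:89:77" compares equal to the "fa:15:2:2:89:77"
	// form found in /var/db/dhcpd_leases hw_address fields.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	func main() {
		want := normalizeMAC("fa:15:02:02:89:77")
		lease := "fa:15:2:2:89:77" // as read from a dhcpd_leases entry
		fmt.Println(normalizeMAC(lease) == want) // true
	}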
	I1009 11:46:36.630763    1765 machine.go:93] provisionDockerMachine start ...
	I1009 11:46:36.632122    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:36.632608    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:36.632622    1765 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 11:46:36.662553    1765 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1009 11:46:39.750031    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 11:46:39.750088    1765 buildroot.go:166] provisioning hostname "addons-953000"
	I1009 11:46:39.750250    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:39.750496    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:39.750507    1765 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-953000 && echo "addons-953000" | sudo tee /etc/hostname
	I1009 11:46:39.817060    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-953000
	
	I1009 11:46:39.817156    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:39.817325    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:39.817337    1765 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-953000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-953000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-953000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 11:46:39.873075    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 11:46:39.873085    1765 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19780-1164/.minikube CaCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19780-1164/.minikube}
	I1009 11:46:39.873093    1765 buildroot.go:174] setting up certificates
	I1009 11:46:39.873098    1765 provision.go:84] configureAuth start
	I1009 11:46:39.873106    1765 provision.go:143] copyHostCerts
	I1009 11:46:39.873202    1765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem (1078 bytes)
	I1009 11:46:39.873766    1765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem (1123 bytes)
	I1009 11:46:39.873938    1765 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem (1679 bytes)
	I1009 11:46:39.874062    1765 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem org=jenkins.addons-953000 san=[127.0.0.1 192.168.105.2 addons-953000 localhost minikube]
	I1009 11:46:40.302373    1765 provision.go:177] copyRemoteCerts
	I1009 11:46:40.302501    1765 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 11:46:40.302526    1765 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/id_rsa Username:docker}
	I1009 11:46:40.331645    1765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 11:46:40.340221    1765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 11:46:40.348609    1765 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 11:46:40.358730    1765 provision.go:87] duration metric: took 485.624333ms to configureAuth
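configureAuth above generates a server certificate whose SANs cover the loopback address, the VM IP, and the names addons-953000, localhost and minikube, which the dockerd --tlsverify flags later rely on. A minimal sketch of such a SAN-bearing template with Go's crypto/x509; the self-signing and the 2048-bit key are simplifications of this sketch, as minikube signs with its CA key instead:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs mirroring the provision.go line above: IP and DNS
		// entries go into separate certificate fields.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-953000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.2")},
			DNSNames:     []string{"addons-953000", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		// Self-signed here for brevity (template doubles as parent).
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der) > 0, err)
	}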
	I1009 11:46:40.358743    1765 buildroot.go:189] setting minikube options for container-runtime
	I1009 11:46:40.358862    1765 config.go:182] Loaded profile config "addons-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 11:46:40.358915    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:40.359012    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:40.359017    1765 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 11:46:40.409847    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 11:46:40.409858    1765 buildroot.go:70] root file system type: tmpfs
	I1009 11:46:40.409909    1765 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 11:46:40.409994    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:40.410101    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:40.410136    1765 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 11:46:40.466691    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 11:46:40.466757    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:40.466873    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:40.466881    1765 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 11:46:41.826796    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
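The drop-in installed above first clears ExecStart= and only then sets the real dockerd command; the comments embedded in the unit explain why systemd would otherwise reject two accumulated ExecStart values. A small sketch of that reset rule, under the assumption that only [Service] ExecStart lines matter:

	package main

	import (
		"fmt"
		"strings"
	)

	// effectiveExecStarts applies systemd's rule: a bare "ExecStart="
	// clears everything accumulated so far, so a drop-in can replace
	// the base unit's command instead of appending a second one.
	func effectiveExecStarts(unit string) []string {
		var cmds []string
		for _, line := range strings.Split(unit, "\n") {
			line = strings.TrimSpace(line)
			if !strings.HasPrefix(line, "ExecStart=") {
				continue
			}
			val := strings.TrimPrefix(line, "ExecStart=")
			if val == "" {
				cmds = nil // reset, as systemd does
			} else {
				cmds = append(cmds, val)
			}
		}
		return cmds
	}

	func main() {
		unit := "ExecStart=/usr/bin/dockerd-from-base\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock"
		fmt.Println(effectiveExecStarts(unit)) // only the replacement command survives
	}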
	
	I1009 11:46:41.826808    1765 machine.go:96] duration metric: took 5.196086s to provisionDockerMachine
	I1009 11:46:41.826816    1765 client.go:171] duration metric: took 22.432210208s to LocalClient.Create
	I1009 11:46:41.826829    1765 start.go:167] duration metric: took 22.432268584s to libmachine.API.Create "addons-953000"
	I1009 11:46:41.826834    1765 start.go:293] postStartSetup for "addons-953000" (driver="qemu2")
	I1009 11:46:41.826841    1765 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 11:46:41.826910    1765 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 11:46:41.826929    1765 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/id_rsa Username:docker}
	I1009 11:46:41.855077    1765 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 11:46:41.856473    1765 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 11:46:41.856479    1765 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/addons for local assets ...
	I1009 11:46:41.856560    1765 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/files for local assets ...
	I1009 11:46:41.856598    1765 start.go:296] duration metric: took 29.761209ms for postStartSetup
	I1009 11:46:41.856997    1765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/addons-953000/config.json ...
	I1009 11:46:41.857207    1765 start.go:128] duration metric: took 22.70398425s to createHost
	I1009 11:46:41.857246    1765 main.go:141] libmachine: Using SSH client type: native
	I1009 11:46:41.857337    1765 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10507a480] 0x10507ccc0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1009 11:46:41.857341    1765 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 11:46:41.905149    1765 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728499602.330220796
	
	I1009 11:46:41.905159    1765 fix.go:216] guest clock: 1728499602.330220796
	I1009 11:46:41.905163    1765 fix.go:229] Guest: 2024-10-09 11:46:42.330220796 -0700 PDT Remote: 2024-10-09 11:46:41.857214 -0700 PDT m=+22.816642751 (delta=473.006796ms)
	I1009 11:46:41.905174    1765 fix.go:200] guest clock delta is within tolerance: 473.006796ms
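fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the skew when the delta stays inside a tolerance. Reproducing the arithmetic from the logged values (the 2-second tolerance below is an assumption; the real threshold lives in fix.go):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest reports a Unix timestamp via `date +%s.%N`;
		// the host takes its own time.Now() just before the SSH call.
		guest := time.Unix(1728499602, 330220796)
		host := time.Date(2024, 10, 9, 11, 46, 41, 857214000, time.FixedZone("PDT", -7*3600))
		delta := guest.Sub(host)
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Println(delta, delta < tolerance) // 473.006796ms true
	}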
	I1009 11:46:41.905177    1765 start.go:83] releasing machines lock for "addons-953000", held for 22.751996541s
	I1009 11:46:41.905486    1765 ssh_runner.go:195] Run: cat /version.json
	I1009 11:46:41.905496    1765 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/id_rsa Username:docker}
	I1009 11:46:41.905487    1765 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 11:46:41.905542    1765 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/addons-953000/id_rsa Username:docker}
	I1009 11:46:42.154736    1765 ssh_runner.go:195] Run: systemctl --version
	I1009 11:46:42.165154    1765 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 11:46:42.171432    1765 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 11:46:42.171652    1765 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 11:46:42.192475    1765 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 11:46:42.192499    1765 start.go:495] detecting cgroup driver to use...
	I1009 11:46:42.192749    1765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 11:46:42.206889    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1009 11:46:42.213811    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 11:46:42.220186    1765 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 11:46:42.220247    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 11:46:42.225609    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 11:46:42.230841    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 11:46:42.235881    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 11:46:42.240222    1765 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 11:46:42.244449    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 11:46:42.248648    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 11:46:42.252497    1765 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 11:46:42.256323    1765 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 11:46:42.259981    1765 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 11:46:42.260008    1765 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 11:46:42.264629    1765 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 11:46:42.268266    1765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:46:42.357568    1765 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 11:46:42.365730    1765 start.go:495] detecting cgroup driver to use...
	I1009 11:46:42.365826    1765 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 11:46:42.372134    1765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 11:46:42.386168    1765 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 11:46:42.403274    1765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 11:46:42.409617    1765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 11:46:42.415989    1765 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 11:46:42.458252    1765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 11:46:42.463597    1765 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 11:46:42.470528    1765 ssh_runner.go:195] Run: which cri-dockerd
	I1009 11:46:42.471958    1765 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 11:46:42.475473    1765 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1009 11:46:42.481472    1765 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 11:46:42.566777    1765 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 11:46:42.648735    1765 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 11:46:42.648803    1765 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 11:46:42.655315    1765 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:46:42.739364    1765 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 11:47:43.824724    1765 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.086084125s)
	I1009 11:47:43.825052    1765 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1009 11:47:43.859471    1765 out.go:201] 
	W1009 11:47:43.863518    1765 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 09 18:46:41 addons-953000 systemd[1]: Starting Docker Application Container Engine...
	Oct 09 18:46:41 addons-953000 dockerd[545]: time="2024-10-09T18:46:41.112795129Z" level=info msg="Starting up"
	Oct 09 18:46:41 addons-953000 dockerd[545]: time="2024-10-09T18:46:41.113117629Z" level=info msg="containerd not running, starting managed containerd"
	Oct 09 18:46:41 addons-953000 dockerd[545]: time="2024-10-09T18:46:41.113542962Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=551
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.128892879Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137754129Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137768379Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137789420Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137795795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137821462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137827379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137895504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137905629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137911087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137916004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.137939254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138043295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138739045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138772712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138838129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138848545Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138878254Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.138895420Z" level=info msg="metadata content store policy set" policy=shared
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.141854962Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.141879587Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.141889254Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.141896920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.141905545Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.141941795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142238379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142303045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142325795Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142344087Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142363254Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142382212Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142401337Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142420337Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142439420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142457962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142475212Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142492795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142516170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142555712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142707087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142723087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142731670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142739754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142745629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142755045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142766879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142776587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142783712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142791295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142798670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142808087Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142824045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142841670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142847629Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142907837Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142925337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142932962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142940670Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142945587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142953587Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142986170Z" level=info msg="NRI interface is disabled by configuration."
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143145920Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143170337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143182879Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143196920Z" level=info msg="containerd successfully booted in 0.014655s"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.151983754Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.161870087Z" level=info msg="Loading containers: start."
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.203903879Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.235415171Z" level=info msg="Loading containers: done."
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239040421Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239049837Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239061837Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239092587Z" level=info msg="Daemon has completed initialization"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.251324462Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 09 18:46:42 addons-953000 systemd[1]: Started Docker Application Container Engine.
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.252281587Z" level=info msg="API listen on [::]:2376"
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170285005Z" level=info msg="Processing signal 'terminated'"
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170860671Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170970005Z" level=info msg="Daemon shutdown complete"
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170984088Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.171000046Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 09 18:46:43 addons-953000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 09 18:46:44 addons-953000 systemd[1]: docker.service: Deactivated successfully.
	Oct 09 18:46:44 addons-953000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 09 18:46:44 addons-953000 systemd[1]: Starting Docker Application Container Engine...
	Oct 09 18:46:44 addons-953000 dockerd[897]: time="2024-10-09T18:46:44.230123630Z" level=info msg="Starting up"
	Oct 09 18:47:43 addons-953000 dockerd[897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 09 18:47:43 addons-953000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 09 18:47:43 addons-953000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 09 18:47:43 addons-953000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
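The decisive line in the journal is dockerd giving up on /run/containerd/containerd.sock with "context deadline exceeded": the managed containerd it had just shut down never became dialable again within dockerd's startup window. A minimal sketch of how that failure mode surfaces with Go's net.Dialer (the 100 ms deadline is illustrative, not dockerd's actual value):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dialing a unix socket under a context deadline: if the path is
		// absent the dial fails immediately, but if the socket never
		// becomes connectable before the deadline, the returned error
		// wraps context.DeadlineExceeded -- the message in the journal.
		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
		defer cancel()
		var d net.Dialer
		_, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		fmt.Println(err)
	}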
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142783712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142791295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142798670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142808087Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142824045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142841670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142847629Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142907837Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142925337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142932962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142940670Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142945587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142953587Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.142986170Z" level=info msg="NRI interface is disabled by configuration."
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143145920Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143170337Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143182879Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 09 18:46:41 addons-953000 dockerd[551]: time="2024-10-09T18:46:41.143196920Z" level=info msg="containerd successfully booted in 0.014655s"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.151983754Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.161870087Z" level=info msg="Loading containers: start."
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.203903879Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.235415171Z" level=info msg="Loading containers: done."
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239040421Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239049837Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239061837Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.239092587Z" level=info msg="Daemon has completed initialization"
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.251324462Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 09 18:46:42 addons-953000 systemd[1]: Started Docker Application Container Engine.
	Oct 09 18:46:42 addons-953000 dockerd[545]: time="2024-10-09T18:46:42.252281587Z" level=info msg="API listen on [::]:2376"
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170285005Z" level=info msg="Processing signal 'terminated'"
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170860671Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170970005Z" level=info msg="Daemon shutdown complete"
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.170984088Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 09 18:46:43 addons-953000 dockerd[545]: time="2024-10-09T18:46:43.171000046Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 09 18:46:43 addons-953000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 09 18:46:44 addons-953000 systemd[1]: docker.service: Deactivated successfully.
	Oct 09 18:46:44 addons-953000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 09 18:46:44 addons-953000 systemd[1]: Starting Docker Application Container Engine...
	Oct 09 18:46:44 addons-953000 dockerd[897]: time="2024-10-09T18:46:44.230123630Z" level=info msg="Starting up"
	Oct 09 18:47:43 addons-953000 dockerd[897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 09 18:47:43 addons-953000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 09 18:47:43 addons-953000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 09 18:47:43 addons-953000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1009 11:47:43.863571    1765 out.go:270] * 
	* 
	W1009 11:47:43.864618    1765 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 11:47:43.876382    1765 out.go:201] 

                                                
                                                
** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-953000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 90
--- FAIL: TestAddons/Setup (84.87s)
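The stdout above shows the restarted dockerd (pid 897) spending the full minute dialing containerd's control socket before giving up with "context deadline exceeded", which is what surfaces as exit status 90. A minimal Go sketch (a hypothetical probe, not part of the minikube suite) of the same bounded unix-socket dial, usable from inside the guest to tell whether containerd is serving its socket at all:

// probe_containerd.go: dial the containerd control socket the way dockerd
// does, with a bounded deadline, so a hang surfaces as a timeout rather
// than a stall. The socket path is the one named in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		// A timeout here matches the "context deadline exceeded" in the log;
		// "connection refused" would mean containerd is not listening at all.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket is accepting connections")
}

A timeout from this probe points at containerd hanging during startup; an immediate refusal points at containerd never creating the socket after the daemon restart.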

                                                
                                    
TestCertOptions (10.12s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-137000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-137000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.772530458s)

                                                
                                                
-- stdout --
	* [cert-options-137000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-137000" primary control-plane node in "cert-options-137000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-137000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-137000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-137000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-137000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.112292ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-137000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-137000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-137000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-137000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-137000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-137000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.037792ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-137000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-137000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-137000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-137000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-137000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-09 12:52:27.803141 -0700 PDT m=+4013.705480626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-137000 -n cert-options-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-137000 -n cert-options-137000: exit status 7 (34.148042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-137000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-137000
--- FAIL: TestCertOptions (10.12s)
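Because the cert-options host never left state=Stopped, the SAN assertions at cert_options_test.go:69 ran against the stopped-host message rather than a certificate. For reference, a minimal Go sketch (hypothetical helper, assuming a locally readable copy of the /var/lib/minikube/certs/apiserver.crt the test fetches over SSH) of the SAN check a live node would get:

// san_check.go: parse the API server certificate and print its subject
// alternative names, i.e. the values cert_options_test.go:69 looks for.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // test expects localhost, www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses)  // test expects 127.0.0.1, 192.168.15.15
}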

                                                
                                    
TestCertExpiration (197.05s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-620000 --memory=2048 --cert-expiration=3m --driver=qemu2 
E1009 12:51:56.024734    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-620000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.657956875s)

                                                
                                                
-- stdout --
	* [cert-expiration-620000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-620000" primary control-plane node in "cert-expiration-620000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-620000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-620000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-620000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-620000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-620000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.236095959s)

                                                
                                                
-- stdout --
	* [cert-expiration-620000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-620000" primary control-plane node in "cert-expiration-620000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-620000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-620000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-620000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-620000" primary control-plane node in "cert-expiration-620000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-620000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-09 12:55:12.821695 -0700 PDT m=+4178.729520710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-620000 -n cert-expiration-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-620000 -n cert-expiration-620000: exit status 7 (72.489834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-620000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-620000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-620000
--- FAIL: TestCertExpiration (197.05s)
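The expired-certs warning that cert_options_test.go:136 looks for is driven by certificate NotAfter dates; with the VM never booting past socket_vmnet, there was nothing to expire-check. A minimal Go sketch (hypothetical helper, assuming a locally copied apiserver.crt) of that expiry test:

// expiry_check.go: report a certificate's NotAfter and whether it has
// lapsed, the condition behind minikube's expired-certs warning.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed local copy of the guest cert
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Printf("NotAfter: %s expired: %t\n", cert.NotAfter, time.Now().After(cert.NotAfter))
}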

                                                
                                    
TestDockerFlags (12.63s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-242000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-242000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.380810208s)

                                                
                                                
-- stdout --
	* [docker-flags-242000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-242000" primary control-plane node in "docker-flags-242000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-242000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:52:05.209149    4386 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:52:05.209372    4386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:05.209379    4386 out.go:358] Setting ErrFile to fd 2...
	I1009 12:52:05.209381    4386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:05.209535    4386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:52:05.211266    4386 out.go:352] Setting JSON to false
	I1009 12:52:05.233597    4386 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4895,"bootTime":1728498630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:52:05.233703    4386 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:52:05.246161    4386 out.go:177] * [docker-flags-242000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:52:05.254256    4386 notify.go:220] Checking for updates...
	I1009 12:52:05.259149    4386 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:52:05.267490    4386 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:52:05.275008    4386 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:52:05.283122    4386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:52:05.290130    4386 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:52:05.297027    4386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:52:05.302567    4386 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:05.302665    4386 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:05.302720    4386 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:52:05.307151    4386 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:52:05.315123    4386 start.go:297] selected driver: qemu2
	I1009 12:52:05.315130    4386 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:52:05.315136    4386 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:52:05.318242    4386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:52:05.324225    4386 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:52:05.328162    4386 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1009 12:52:05.328189    4386 cni.go:84] Creating CNI manager for ""
	I1009 12:52:05.328219    4386 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:52:05.328225    4386 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:52:05.328273    4386 start.go:340] cluster config:
	{Name:docker-flags-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:52:05.334622    4386 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:52:05.343102    4386 out.go:177] * Starting "docker-flags-242000" primary control-plane node in "docker-flags-242000" cluster
	I1009 12:52:05.346112    4386 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:52:05.346134    4386 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:52:05.346151    4386 cache.go:56] Caching tarball of preloaded images
	I1009 12:52:05.346293    4386 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:52:05.346300    4386 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:52:05.346397    4386 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/docker-flags-242000/config.json ...
	I1009 12:52:05.346410    4386 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/docker-flags-242000/config.json: {Name:mkc594ebc34bed981fb432ec456354b8c5f9baa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:52:05.346715    4386 start.go:360] acquireMachinesLock for docker-flags-242000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:07.550864    4386 start.go:364] duration metric: took 2.204111208s to acquireMachinesLock for "docker-flags-242000"
	I1009 12:52:07.551044    4386 start.go:93] Provisioning new machine with config: &{Name:docker-flags-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:07.551328    4386 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:07.559503    4386 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:52:07.608983    4386 start.go:159] libmachine.API.Create for "docker-flags-242000" (driver="qemu2")
	I1009 12:52:07.609039    4386 client.go:168] LocalClient.Create starting
	I1009 12:52:07.609249    4386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:07.609320    4386 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:07.609344    4386 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:07.609408    4386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:07.609466    4386 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:07.609483    4386 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:07.610209    4386 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:07.775882    4386 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:08.124856    4386 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:08.124866    4386 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:08.125121    4386 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2
	I1009 12:52:08.136517    4386 main.go:141] libmachine: STDOUT: 
	I1009 12:52:08.136553    4386 main.go:141] libmachine: STDERR: 
	I1009 12:52:08.136619    4386 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2 +20000M
	I1009 12:52:08.145333    4386 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:08.145360    4386 main.go:141] libmachine: STDERR: 
	I1009 12:52:08.145377    4386 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2
	I1009 12:52:08.145384    4386 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:08.145393    4386 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:08.145427    4386 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:08:0e:d1:96:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2
	I1009 12:52:08.147317    4386 main.go:141] libmachine: STDOUT: 
	I1009 12:52:08.147334    4386 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:08.147354    4386 client.go:171] duration metric: took 538.3275ms to LocalClient.Create
	I1009 12:52:10.149473    4386 start.go:128] duration metric: took 2.598199334s to createHost
	I1009 12:52:10.149535    4386 start.go:83] releasing machines lock for "docker-flags-242000", held for 2.59870775s
	W1009 12:52:10.149595    4386 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:10.159929    4386 out.go:177] * Deleting "docker-flags-242000" in qemu2 ...
	W1009 12:52:10.187039    4386 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:10.187060    4386 start.go:729] Will try again in 5 seconds ...
	I1009 12:52:15.189036    4386 start.go:360] acquireMachinesLock for docker-flags-242000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:15.189496    4386 start.go:364] duration metric: took 391.959µs to acquireMachinesLock for "docker-flags-242000"
	I1009 12:52:15.189622    4386 start.go:93] Provisioning new machine with config: &{Name:docker-flags-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:15.189909    4386 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:15.205597    4386 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:52:15.254246    4386 start.go:159] libmachine.API.Create for "docker-flags-242000" (driver="qemu2")
	I1009 12:52:15.254283    4386 client.go:168] LocalClient.Create starting
	I1009 12:52:15.254447    4386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:15.254522    4386 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:15.254543    4386 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:15.254611    4386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:15.254667    4386 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:15.254685    4386 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:15.255575    4386 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:15.423861    4386 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:15.481646    4386 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:15.481651    4386 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:15.481857    4386 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2
	I1009 12:52:15.491593    4386 main.go:141] libmachine: STDOUT: 
	I1009 12:52:15.491617    4386 main.go:141] libmachine: STDERR: 
	I1009 12:52:15.491687    4386 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2 +20000M
	I1009 12:52:15.500062    4386 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:15.500076    4386 main.go:141] libmachine: STDERR: 
	I1009 12:52:15.500088    4386 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2
	I1009 12:52:15.500093    4386 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:15.500102    4386 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:15.500132    4386 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:b4:99:62:1a:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/docker-flags-242000/disk.qcow2
	I1009 12:52:15.501928    4386 main.go:141] libmachine: STDOUT: 
	I1009 12:52:15.501947    4386 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:15.501963    4386 client.go:171] duration metric: took 247.682958ms to LocalClient.Create
	I1009 12:52:17.504072    4386 start.go:128] duration metric: took 2.314209916s to createHost
	I1009 12:52:17.504172    4386 start.go:83] releasing machines lock for "docker-flags-242000", held for 2.314688167s
	W1009 12:52:17.504523    4386 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:17.516988    4386 out.go:201] 
	W1009 12:52:17.521295    4386 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:52:17.521321    4386 out.go:270] * 
	* 
	W1009 12:52:17.523854    4386 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:52:17.535140    4386 out.go:201] 
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-242000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-242000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-242000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (87.221083ms)
-- stdout --
	* The control-plane node docker-flags-242000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-242000"
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-242000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-242000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-242000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-242000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-242000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-242000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-242000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.763959ms)
-- stdout --
	* The control-plane node docker-flags-242000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-242000"
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-242000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-242000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-242000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-242000\"\n"
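
Note: for context on the assertions above, a healthy run of this test would show the --docker-env and --docker-opt flags landing in the docker systemd unit inside the VM. A rough sketch of the expected output shape (the property names and flag values come from the test invocation above; the exact systemctl formatting, including the argv[] layout, is an assumption, not taken from this report):

$ out/minikube-darwin-arm64 -p docker-flags-242000 ssh "sudo systemctl show docker --property=Environment --no-pager"
Environment=FOO=BAR BAZ=BAT ...
$ out/minikube-darwin-arm64 -p docker-flags-242000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }

Here the VM never booted, so both commands returned the "host is not running" hint instead.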
panic.go:629: *** TestDockerFlags FAILED at 2024-10-09 12:52:17.687021 -0700 PDT m=+4003.589023918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-242000 -n docker-flags-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-242000 -n docker-flags-242000: exit status 7 (33.801666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-242000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-242000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-242000
--- FAIL: TestDockerFlags (12.63s)
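
Note: every start attempt in this test (and in the tests that follow) fails at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the profile is left in state=Stopped. The launch protocol is visible in the executed command above: socket_vmnet_client connects to the daemon's UNIX socket, then execs QEMU with the connected descriptor passed as fd 3 (-netdev socket,id=net0,fd=3). A minimal sketch to reproduce the failure outside minikube (the echo payload is an arbitrary placeholder; socket_vmnet_client simply execs whatever command follows the socket path):

$ ls -l /var/run/socket_vmnet
# socket missing, or present but refusing connections -> the daemon is down
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
# prints "ok" when the daemon is healthy; on this host it would print the same
# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'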
TestForceSystemdFlag (12.67s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-666000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-666000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.480090292s)
-- stdout --
	* [force-systemd-flag-666000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-666000" primary control-plane node in "force-systemd-flag-666000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-666000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1009 12:51:30.018856    4242 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:51:30.019025    4242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:51:30.019028    4242 out.go:358] Setting ErrFile to fd 2...
	I1009 12:51:30.019031    4242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:51:30.019180    4242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:51:30.020819    4242 out.go:352] Setting JSON to false
	I1009 12:51:30.042755    4242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4860,"bootTime":1728498630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:51:30.042856    4242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:51:30.048461    4242 out.go:177] * [force-systemd-flag-666000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:51:30.056537    4242 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:51:30.056639    4242 notify.go:220] Checking for updates...
	I1009 12:51:30.063436    4242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:51:30.066425    4242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:51:30.069396    4242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:51:30.072410    4242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:51:30.075460    4242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:51:30.078754    4242 config.go:182] Loaded profile config "NoKubernetes-206000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:51:30.078826    4242 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:51:30.078882    4242 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:51:30.083488    4242 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:51:30.098006    4242 start.go:297] selected driver: qemu2
	I1009 12:51:30.098012    4242 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:51:30.098017    4242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:51:30.100354    4242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:51:30.103424    4242 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:51:30.106544    4242 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 12:51:30.106557    4242 cni.go:84] Creating CNI manager for ""
	I1009 12:51:30.106575    4242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:51:30.106578    4242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:51:30.106605    4242 start.go:340] cluster config:
	{Name:force-systemd-flag-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:51:30.110742    4242 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:51:30.119519    4242 out.go:177] * Starting "force-systemd-flag-666000" primary control-plane node in "force-systemd-flag-666000" cluster
	I1009 12:51:30.123304    4242 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:51:30.123319    4242 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:51:30.123324    4242 cache.go:56] Caching tarball of preloaded images
	I1009 12:51:30.123391    4242 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:51:30.123396    4242 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:51:30.123441    4242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/force-systemd-flag-666000/config.json ...
	I1009 12:51:30.123451    4242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/force-systemd-flag-666000/config.json: {Name:mk0b5424d9b780b214e3c85a435c488454d71e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:51:30.123854    4242 start.go:360] acquireMachinesLock for force-systemd-flag-666000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:51:32.217603    4242 start.go:364] duration metric: took 2.093791541s to acquireMachinesLock for "force-systemd-flag-666000"
	I1009 12:51:32.217751    4242 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-666000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:51:32.218030    4242 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:51:32.227397    4242 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:51:32.277912    4242 start.go:159] libmachine.API.Create for "force-systemd-flag-666000" (driver="qemu2")
	I1009 12:51:32.277972    4242 client.go:168] LocalClient.Create starting
	I1009 12:51:32.278106    4242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:51:32.278178    4242 main.go:141] libmachine: Decoding PEM data...
	I1009 12:51:32.278200    4242 main.go:141] libmachine: Parsing certificate...
	I1009 12:51:32.278270    4242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:51:32.278328    4242 main.go:141] libmachine: Decoding PEM data...
	I1009 12:51:32.278343    4242 main.go:141] libmachine: Parsing certificate...
	I1009 12:51:32.279024    4242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:51:32.522663    4242 main.go:141] libmachine: Creating SSH key...
	I1009 12:51:32.673668    4242 main.go:141] libmachine: Creating Disk image...
	I1009 12:51:32.673675    4242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:51:32.673883    4242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2
	I1009 12:51:32.684223    4242 main.go:141] libmachine: STDOUT: 
	I1009 12:51:32.684240    4242 main.go:141] libmachine: STDERR: 
	I1009 12:51:32.684316    4242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2 +20000M
	I1009 12:51:32.692745    4242 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:51:32.692760    4242 main.go:141] libmachine: STDERR: 
	I1009 12:51:32.692778    4242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2
	I1009 12:51:32.692783    4242 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:51:32.692796    4242 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:51:32.692826    4242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dc:ec:c3:47:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2
	I1009 12:51:32.694606    4242 main.go:141] libmachine: STDOUT: 
	I1009 12:51:32.694627    4242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:51:32.694649    4242 client.go:171] duration metric: took 416.685334ms to LocalClient.Create
	I1009 12:51:34.696769    4242 start.go:128] duration metric: took 2.478760416s to createHost
	I1009 12:51:34.696868    4242 start.go:83] releasing machines lock for "force-systemd-flag-666000", held for 2.479304292s
	W1009 12:51:34.696938    4242 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:51:34.713145    4242 out.go:177] * Deleting "force-systemd-flag-666000" in qemu2 ...
	W1009 12:51:34.740127    4242 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:51:34.740156    4242 start.go:729] Will try again in 5 seconds ...
	I1009 12:51:39.742185    4242 start.go:360] acquireMachinesLock for force-systemd-flag-666000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:51:39.755343    4242 start.go:364] duration metric: took 13.076583ms to acquireMachinesLock for "force-systemd-flag-666000"
	I1009 12:51:39.755400    4242 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-666000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:51:39.755598    4242 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:51:39.768347    4242 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:51:39.816058    4242 start.go:159] libmachine.API.Create for "force-systemd-flag-666000" (driver="qemu2")
	I1009 12:51:39.816112    4242 client.go:168] LocalClient.Create starting
	I1009 12:51:39.816219    4242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:51:39.816301    4242 main.go:141] libmachine: Decoding PEM data...
	I1009 12:51:39.816325    4242 main.go:141] libmachine: Parsing certificate...
	I1009 12:51:39.816377    4242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:51:39.816432    4242 main.go:141] libmachine: Decoding PEM data...
	I1009 12:51:39.816445    4242 main.go:141] libmachine: Parsing certificate...
	I1009 12:51:39.816989    4242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:51:40.069540    4242 main.go:141] libmachine: Creating SSH key...
	I1009 12:51:40.389171    4242 main.go:141] libmachine: Creating Disk image...
	I1009 12:51:40.389182    4242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:51:40.389385    4242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2
	I1009 12:51:40.399625    4242 main.go:141] libmachine: STDOUT: 
	I1009 12:51:40.399646    4242 main.go:141] libmachine: STDERR: 
	I1009 12:51:40.399705    4242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2 +20000M
	I1009 12:51:40.408216    4242 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:51:40.408232    4242 main.go:141] libmachine: STDERR: 
	I1009 12:51:40.408244    4242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2
	I1009 12:51:40.408256    4242 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:51:40.408269    4242 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:51:40.408307    4242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ec:1d:c3:12:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-flag-666000/disk.qcow2
	I1009 12:51:40.410104    4242 main.go:141] libmachine: STDOUT: 
	I1009 12:51:40.410126    4242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:51:40.410149    4242 client.go:171] duration metric: took 594.052ms to LocalClient.Create
	I1009 12:51:42.412255    4242 start.go:128] duration metric: took 2.656718833s to createHost
	I1009 12:51:42.412321    4242 start.go:83] releasing machines lock for "force-systemd-flag-666000", held for 2.657036459s
	W1009 12:51:42.412708    4242 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-666000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-666000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:51:42.427305    4242 out.go:201] 
	W1009 12:51:42.432213    4242 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:51:42.432240    4242 out.go:270] * 
	* 
	W1009 12:51:42.434884    4242 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:51:42.443261    4242 out.go:201] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-666000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-666000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-666000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.013ms)
-- stdout --
	* The control-plane node force-systemd-flag-666000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-666000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-666000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-09 12:51:42.540672 -0700 PDT m=+3968.441501918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-666000 -n force-systemd-flag-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-666000 -n force-systemd-flag-666000: exit status 7 (36.582792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-666000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-666000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-666000
--- FAIL: TestForceSystemdFlag (12.67s)
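
Note: TestForceSystemdFlag dies on the identical Connection refused as TestDockerFlags above and TestForceSystemdEnv below, which points at the shared socket_vmnet daemon on this CI host rather than at any individual test. A hedged recovery sketch, assuming a manual install under /opt/socket_vmnet as the client path in the logs suggests (the gateway address is the socket_vmnet default and the launchd label query is an assumption; neither appears in this report):

$ sudo launchctl list | grep -i socket_vmnet
# if no service is registered, run the daemon in the foreground and
# watch it create the socket:
$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once /var/run/socket_vmnet accepts connections, rerunning the failed start commands should get past "Starting QEMU VM...".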
TestForceSystemdEnv (10.1s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-983000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1009 12:51:55.882906    1686 install.go:79] stdout: 
W1009 12:51:55.883064    1686 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit 
I1009 12:51:55.883089    1686 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit]
I1009 12:51:55.896502    1686 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit]
I1009 12:51:55.907836    1686 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit]
I1009 12:51:55.918445    1686 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-983000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.893375625s)
-- stdout --
	* [force-systemd-env-983000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-983000" primary control-plane node in "force-systemd-env-983000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-983000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1009 12:51:55.105485    4345 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:51:55.105641    4345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:51:55.105645    4345 out.go:358] Setting ErrFile to fd 2...
	I1009 12:51:55.105647    4345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:51:55.105781    4345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:51:55.106927    4345 out.go:352] Setting JSON to false
	I1009 12:51:55.124491    4345 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4885,"bootTime":1728498630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:51:55.124562    4345 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:51:55.129913    4345 out.go:177] * [force-systemd-env-983000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:51:55.138116    4345 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:51:55.138170    4345 notify.go:220] Checking for updates...
	I1009 12:51:55.147055    4345 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:51:55.154006    4345 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:51:55.162009    4345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:51:55.170025    4345 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:51:55.178017    4345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1009 12:51:55.182462    4345 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:51:55.182510    4345 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:51:55.186853    4345 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:51:55.194021    4345 start.go:297] selected driver: qemu2
	I1009 12:51:55.194028    4345 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:51:55.194034    4345 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:51:55.196667    4345 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:51:55.200837    4345 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:51:55.205076    4345 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 12:51:55.205092    4345 cni.go:84] Creating CNI manager for ""
	I1009 12:51:55.205123    4345 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:51:55.205128    4345 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:51:55.205158    4345 start.go:340] cluster config:
	{Name:force-systemd-env-983000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-983000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:51:55.209937    4345 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:51:55.218002    4345 out.go:177] * Starting "force-systemd-env-983000" primary control-plane node in "force-systemd-env-983000" cluster
	I1009 12:51:55.222010    4345 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:51:55.222034    4345 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:51:55.222044    4345 cache.go:56] Caching tarball of preloaded images
	I1009 12:51:55.222130    4345 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:51:55.222137    4345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:51:55.222216    4345 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/force-systemd-env-983000/config.json ...
	I1009 12:51:55.222231    4345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/force-systemd-env-983000/config.json: {Name:mkbb27f288ccc67a4fee262fbb42d29f461845c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:51:55.222549    4345 start.go:360] acquireMachinesLock for force-systemd-env-983000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:51:55.222606    4345 start.go:364] duration metric: took 47.042µs to acquireMachinesLock for "force-systemd-env-983000"
	I1009 12:51:55.222617    4345 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-983000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-983000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:51:55.222649    4345 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:51:55.229969    4345 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:51:55.246975    4345 start.go:159] libmachine.API.Create for "force-systemd-env-983000" (driver="qemu2")
	I1009 12:51:55.247000    4345 client.go:168] LocalClient.Create starting
	I1009 12:51:55.247070    4345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:51:55.247107    4345 main.go:141] libmachine: Decoding PEM data...
	I1009 12:51:55.247118    4345 main.go:141] libmachine: Parsing certificate...
	I1009 12:51:55.247155    4345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:51:55.247184    4345 main.go:141] libmachine: Decoding PEM data...
	I1009 12:51:55.247196    4345 main.go:141] libmachine: Parsing certificate...
	I1009 12:51:55.247543    4345 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:51:55.403359    4345 main.go:141] libmachine: Creating SSH key...
	I1009 12:51:55.559689    4345 main.go:141] libmachine: Creating Disk image...
	I1009 12:51:55.559700    4345 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:51:55.559915    4345 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2
	I1009 12:51:55.570113    4345 main.go:141] libmachine: STDOUT: 
	I1009 12:51:55.570130    4345 main.go:141] libmachine: STDERR: 
	I1009 12:51:55.570203    4345 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2 +20000M
	I1009 12:51:55.579108    4345 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:51:55.579124    4345 main.go:141] libmachine: STDERR: 
	I1009 12:51:55.579141    4345 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2
	I1009 12:51:55.579145    4345 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:51:55.579156    4345 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:51:55.579185    4345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:87:89:68:79:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2
	I1009 12:51:55.581130    4345 main.go:141] libmachine: STDOUT: 
	I1009 12:51:55.581146    4345 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:51:55.581168    4345 client.go:171] duration metric: took 334.17375ms to LocalClient.Create
	I1009 12:51:57.583317    4345 start.go:128] duration metric: took 2.360714s to createHost
	I1009 12:51:57.583431    4345 start.go:83] releasing machines lock for "force-systemd-env-983000", held for 2.360893125s
	W1009 12:51:57.583508    4345 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:51:57.601563    4345 out.go:177] * Deleting "force-systemd-env-983000" in qemu2 ...
	W1009 12:51:57.623413    4345 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:51:57.623436    4345 start.go:729] Will try again in 5 seconds ...
	I1009 12:52:02.625741    4345 start.go:360] acquireMachinesLock for force-systemd-env-983000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:02.626349    4345 start.go:364] duration metric: took 465.084µs to acquireMachinesLock for "force-systemd-env-983000"
	I1009 12:52:02.626509    4345 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-983000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-983000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:02.626763    4345 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:02.632356    4345 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 12:52:02.680873    4345 start.go:159] libmachine.API.Create for "force-systemd-env-983000" (driver="qemu2")
	I1009 12:52:02.680924    4345 client.go:168] LocalClient.Create starting
	I1009 12:52:02.681064    4345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:02.681136    4345 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:02.681150    4345 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:02.681207    4345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:02.681266    4345 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:02.681276    4345 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:02.681939    4345 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:02.848391    4345 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:02.902048    4345 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:02.902054    4345 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:02.902270    4345 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2
	I1009 12:52:02.912219    4345 main.go:141] libmachine: STDOUT: 
	I1009 12:52:02.912238    4345 main.go:141] libmachine: STDERR: 
	I1009 12:52:02.912295    4345 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2 +20000M
	I1009 12:52:02.920876    4345 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:02.920900    4345 main.go:141] libmachine: STDERR: 
	I1009 12:52:02.920911    4345 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2
	I1009 12:52:02.920916    4345 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:02.920926    4345 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:02.920954    4345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:6b:26:da:e8:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/force-systemd-env-983000/disk.qcow2
	I1009 12:52:02.922795    4345 main.go:141] libmachine: STDOUT: 
	I1009 12:52:02.922809    4345 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:02.922827    4345 client.go:171] duration metric: took 241.904875ms to LocalClient.Create
	I1009 12:52:04.924929    4345 start.go:128] duration metric: took 2.298207333s to createHost
	I1009 12:52:04.925024    4345 start.go:83] releasing machines lock for "force-systemd-env-983000", held for 2.298709375s
	W1009 12:52:04.925460    4345 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-983000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:04.935013    4345 out.go:201] 
	W1009 12:52:04.941060    4345 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:52:04.941087    4345 out.go:270] * 
	W1009 12:52:04.943869    4345 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:52:04.952993    4345 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-983000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-983000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-983000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.017083ms)

-- stdout --
	* The control-plane node force-systemd-env-983000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-983000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-983000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-09 12:52:05.049191 -0700 PDT m=+3990.950773043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-983000 -n force-systemd-env-983000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-983000 -n force-systemd-env-983000: exit status 7 (37.541709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-983000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-983000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-983000
--- FAIL: TestForceSystemdEnv (10.10s)
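This failure, like the other qemu2 start failures in this report, reduces to `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the socket_vmnet daemon is not running on the build host, so the socket_vmnet_client wrapper around qemu-system-aarch64 exits before the VM ever boots. A hedged host-side triage sketch, assuming the Homebrew layout implied by the /opt/socket_vmnet paths in the logs:

	# Does the daemon socket exist?
	ls -l /var/run/socket_vmnet
	# Is a launchd job loaded for it?
	sudo launchctl list | grep -i socket_vmnet
	# Restart via Homebrew services (root is needed to create the vmnet interface;
	# minikube's docs use: sudo $(which brew) services start socket_vmnet)
	sudo brew services start socket_vmnet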

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-517000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-517000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-88jjn" [132ecdf9-5e02-4865-a825-6a9ee4e20459] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-88jjn" [132ecdf9-5e02-4865-a825-6a9ee4e20459] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003939125s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31116
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
I1009 11:52:17.805387    1686 retry.go:31] will retry after 872.733177ms: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
I1009 11:52:18.680786    1686 retry.go:31] will retry after 1.483122445s: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
I1009 11:52:20.167707    1686 retry.go:31] will retry after 1.796406145s: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
I1009 11:52:21.967372    1686 retry.go:31] will retry after 4.265805066s: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
I1009 11:52:26.235872    1686 retry.go:31] will retry after 6.611761151s: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
I1009 11:52:32.850802    1686 retry.go:31] will retry after 4.360740237s: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31116: Get "http://192.168.105.4:31116": dial tcp 192.168.105.4:31116: connect: connection refused
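Every probe above failed with "connection refused", and the retry helper backed off with increasing, jittered delays before the test gave up. The same check can be reproduced by hand against the NodePort URL from the log (a sketch; curl >= 7.52 is assumed for --retry-connrefused):

	# Re-probe the NodePort the test could not reach; URL taken from the log above.
	curl -sv --retry 6 --retry-connrefused --retry-delay 2 http://192.168.105.4:31116/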
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-517000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-88jjn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-517000/192.168.105.4
Start Time:       Wed, 09 Oct 2024 11:52:10 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://93962dd42a5009524726a8dd83795622ace99a226632873d250e32615a7f4e10
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 09 Oct 2024 11:52:28 -0700
Finished:     Wed, 09 Oct 2024 11:52:28 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 09 Oct 2024 11:52:11 -0700
Finished:     Wed, 09 Oct 2024 11:52:11 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p5n8g (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-p5n8g:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  26s               default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-88jjn to functional-517000
Normal   Pulled     9s (x3 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    9s (x3 over 26s)  kubelet            Created container echoserver-arm
Normal   Started    9s (x3 over 26s)  kubelet            Started container echoserver-arm
Warning  BackOff    8s (x2 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-88jjn_default(132ecdf9-5e02-4865-a825-6a9ee4e20459)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-517000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
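`exec format error` means the kernel refused to execute the container's entrypoint binary, which almost always indicates an architecture mismatch: the entrypoint inside echoserver-arm:1.8 is evidently not an arm64 binary, so the pod crash-loops on this arm64 node. A hedged way to confirm which platform the image actually carries:

	# Inspect the pulled image's platform (assumes the image is present locally).
	docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'
	# Or query the registry manifest without pulling:
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8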
functional_test.go:1614: (dbg) Run:  kubectl --context functional-517000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.70.29
IPs:                      10.100.70.29
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31116/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
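The empty `Endpoints:` field ties the two failures together: because the pod never becomes Ready, the Service has no backends, kube-proxy has nothing to forward NodePort 31116 to, and the host sees "connection refused". A quick confirmation sketch:

	# No Ready backends => empty endpoints => refused connections on the NodePort.
	kubectl --context functional-517000 get endpoints hello-node-connect
	kubectl --context functional-517000 get pods -l app=hello-node-connect -o wide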
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-517000 -n functional-517000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-517000 image load --daemon                                                                           | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | kicbase/echo-server:functional-517000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-517000 image ls                                                                                      | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	| image   | functional-517000 image save                                                                                    | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | kicbase/echo-server:functional-517000                                                                           |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-517000 image rm                                                                                      | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | kicbase/echo-server:functional-517000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-517000 image ls                                                                                      | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	| image   | functional-517000 image load                                                                                    | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-517000 image ls                                                                                      | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	| image   | functional-517000 image save --daemon                                                                           | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | kicbase/echo-server:functional-517000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-517000 ssh echo                                                                                      | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-517000 ssh cat                                                                                       | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT | 09 Oct 24 11:51 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-517000 tunnel                                                                                        | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-517000 tunnel                                                                                        | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:51 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-517000 tunnel                                                                                        | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| service | functional-517000 service list                                                                                  | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	| service | functional-517000 service list                                                                                  | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-517000 service                                                                                       | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-517000                                                                                               | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-517000 service                                                                                       | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| addons  | functional-517000 addons list                                                                                   | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	| addons  | functional-517000 addons list                                                                                   | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-517000 service                                                                                       | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-517000 ssh findmnt                                                                                   | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| mount   | -p functional-517000                                                                                            | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3362515138/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-517000 ssh -- ls                                                                                     | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-517000 ssh cat                                                                                       | functional-517000 | jenkins | v1.34.0 | 09 Oct 24 11:52 PDT | 09 Oct 24 11:52 PDT |
	|         | /mount-9p/test-1728499954959023000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 11:51:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 11:51:14.861639    1966 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:51:14.861796    1966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:51:14.861798    1966 out.go:358] Setting ErrFile to fd 2...
	I1009 11:51:14.861800    1966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:51:14.861944    1966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 11:51:14.863021    1966 out.go:352] Setting JSON to false
	I1009 11:51:14.880854    1966 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1244,"bootTime":1728498630,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:51:14.880930    1966 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:51:14.886236    1966 out.go:177] * [functional-517000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:51:14.895221    1966 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 11:51:14.895271    1966 notify.go:220] Checking for updates...
	I1009 11:51:14.902149    1966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:51:14.905210    1966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:51:14.908138    1966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:51:14.911171    1966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 11:51:14.914191    1966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 11:51:14.915831    1966 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 11:51:14.915882    1966 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:51:14.920143    1966 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 11:51:14.927025    1966 start.go:297] selected driver: qemu2
	I1009 11:51:14.927029    1966 start.go:901] validating driver "qemu2" against &{Name:functional-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:51:14.927070    1966 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 11:51:14.929556    1966 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 11:51:14.929574    1966 cni.go:84] Creating CNI manager for ""
	I1009 11:51:14.929608    1966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 11:51:14.929648    1966 start.go:340] cluster config:
	{Name:functional-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:51:14.933962    1966 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 11:51:14.945218    1966 out.go:177] * Starting "functional-517000" primary control-plane node in "functional-517000" cluster
	I1009 11:51:14.949172    1966 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 11:51:14.949183    1966 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 11:51:14.949195    1966 cache.go:56] Caching tarball of preloaded images
	I1009 11:51:14.949262    1966 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 11:51:14.949266    1966 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 11:51:14.949314    1966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/config.json ...
	I1009 11:51:14.949840    1966 start.go:360] acquireMachinesLock for functional-517000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 11:51:14.949888    1966 start.go:364] duration metric: took 43.084µs to acquireMachinesLock for "functional-517000"
	I1009 11:51:14.949895    1966 start.go:96] Skipping create...Using existing machine configuration
	I1009 11:51:14.949897    1966 fix.go:54] fixHost starting: 
	I1009 11:51:14.950522    1966 fix.go:112] recreateIfNeeded on functional-517000: state=Running err=<nil>
	W1009 11:51:14.950529    1966 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 11:51:14.958115    1966 out.go:177] * Updating the running qemu2 "functional-517000" VM ...
	I1009 11:51:14.962172    1966 machine.go:93] provisionDockerMachine start ...
	I1009 11:51:14.962213    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:14.962339    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:14.962342    1966 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 11:51:15.005606    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-517000
	
	I1009 11:51:15.005620    1966 buildroot.go:166] provisioning hostname "functional-517000"
	I1009 11:51:15.005673    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:15.005777    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:15.005781    1966 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-517000 && echo "functional-517000" | sudo tee /etc/hostname
	I1009 11:51:15.051791    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-517000
	
	I1009 11:51:15.051844    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:15.051947    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:15.051953    1966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-517000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-517000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-517000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 11:51:15.090797    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 11:51:15.090804    1966 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19780-1164/.minikube CaCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19780-1164/.minikube}
	I1009 11:51:15.090810    1966 buildroot.go:174] setting up certificates
	I1009 11:51:15.090814    1966 provision.go:84] configureAuth start
	I1009 11:51:15.090821    1966 provision.go:143] copyHostCerts
	I1009 11:51:15.090889    1966 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem, removing ...
	I1009 11:51:15.090893    1966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem
	I1009 11:51:15.091146    1966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem (1078 bytes)
	I1009 11:51:15.091340    1966 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem, removing ...
	I1009 11:51:15.091343    1966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem
	I1009 11:51:15.091404    1966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem (1123 bytes)
	I1009 11:51:15.091526    1966 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem, removing ...
	I1009 11:51:15.091528    1966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem
	I1009 11:51:15.091582    1966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem (1679 bytes)
	I1009 11:51:15.091662    1966 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem org=jenkins.functional-517000 san=[127.0.0.1 192.168.105.4 functional-517000 localhost minikube]
	I1009 11:51:15.264353    1966 provision.go:177] copyRemoteCerts
	I1009 11:51:15.264426    1966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 11:51:15.264434    1966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
	I1009 11:51:15.289197    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 11:51:15.298324    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 11:51:15.306810    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 11:51:15.317817    1966 provision.go:87] duration metric: took 226.995167ms to configureAuth
	I1009 11:51:15.317825    1966 buildroot.go:189] setting minikube options for container-runtime
	I1009 11:51:15.317940    1966 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 11:51:15.317988    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:15.318107    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:15.318111    1966 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 11:51:15.357728    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 11:51:15.357734    1966 buildroot.go:70] root file system type: tmpfs
	I1009 11:51:15.357783    1966 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 11:51:15.357844    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:15.357958    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:15.357988    1966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 11:51:15.403602    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 11:51:15.403659    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:15.403783    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:15.403790    1966 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 11:51:15.445474    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 11:51:15.445480    1966 machine.go:96] duration metric: took 483.306459ms to provisionDockerMachine
	I1009 11:51:15.445484    1966 start.go:293] postStartSetup for "functional-517000" (driver="qemu2")
	I1009 11:51:15.445489    1966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 11:51:15.445539    1966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 11:51:15.445545    1966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
	I1009 11:51:15.466930    1966 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 11:51:15.468501    1966 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 11:51:15.468506    1966 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/addons for local assets ...
	I1009 11:51:15.468590    1966 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/files for local assets ...
	I1009 11:51:15.468732    1966 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem -> 16862.pem in /etc/ssl/certs
	I1009 11:51:15.468882    1966 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/test/nested/copy/1686/hosts -> hosts in /etc/test/nested/copy/1686
	I1009 11:51:15.468926    1966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1686
	I1009 11:51:15.472820    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /etc/ssl/certs/16862.pem (1708 bytes)
	I1009 11:51:15.481672    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/test/nested/copy/1686/hosts --> /etc/test/nested/copy/1686/hosts (40 bytes)
	I1009 11:51:15.490097    1966 start.go:296] duration metric: took 44.608708ms for postStartSetup
	I1009 11:51:15.490107    1966 fix.go:56] duration metric: took 540.21125ms for fixHost
	I1009 11:51:15.490148    1966 main.go:141] libmachine: Using SSH client type: native
	I1009 11:51:15.490241    1966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006fa480] 0x1006fccc0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1009 11:51:15.490244    1966 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 11:51:15.530018    1966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728499875.562135089
	
	I1009 11:51:15.530024    1966 fix.go:216] guest clock: 1728499875.562135089
	I1009 11:51:15.530027    1966 fix.go:229] Guest: 2024-10-09 11:51:15.562135089 -0700 PDT Remote: 2024-10-09 11:51:15.490109 -0700 PDT m=+0.649767585 (delta=72.026089ms)
	I1009 11:51:15.530037    1966 fix.go:200] guest clock delta is within tolerance: 72.026089ms
	I1009 11:51:15.530039    1966 start.go:83] releasing machines lock for "functional-517000", held for 580.149833ms
	I1009 11:51:15.530381    1966 ssh_runner.go:195] Run: cat /version.json
	I1009 11:51:15.530387    1966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
	I1009 11:51:15.530407    1966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 11:51:15.530428    1966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
	I1009 11:51:15.553077    1966 ssh_runner.go:195] Run: systemctl --version
	I1009 11:51:15.594053    1966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 11:51:15.596046    1966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 11:51:15.596084    1966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 11:51:15.599963    1966 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 11:51:15.599968    1966 start.go:495] detecting cgroup driver to use...
	I1009 11:51:15.600033    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 11:51:15.606573    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1009 11:51:15.610210    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 11:51:15.614164    1966 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 11:51:15.614191    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 11:51:15.617915    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 11:51:15.621896    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 11:51:15.625761    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 11:51:15.629587    1966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 11:51:15.633504    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 11:51:15.637660    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 11:51:15.641759    1966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 11:51:15.645692    1966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 11:51:15.649673    1966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 11:51:15.653669    1966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:51:15.756412    1966 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 11:51:15.767097    1966 start.go:495] detecting cgroup driver to use...
	I1009 11:51:15.767187    1966 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 11:51:15.773002    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 11:51:15.779495    1966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 11:51:15.789962    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 11:51:15.795698    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 11:51:15.800843    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 11:51:15.807613    1966 ssh_runner.go:195] Run: which cri-dockerd
	I1009 11:51:15.809013    1966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 11:51:15.812856    1966 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1009 11:51:15.819312    1966 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 11:51:15.911852    1966 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 11:51:16.004860    1966 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 11:51:16.004922    1966 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
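The 130-byte daemon.json written here is what pins Docker to the cgroupfs driver chosen in the preceding step. Its exact contents are not shown in the log; a representative (assumed, not verbatim) payload would be:

	# Hypothetical shape of the /etc/docker/daemon.json written by the provisioner:
	# {"exec-opts": ["native.cgroupdriver=cgroupfs"]}
	cat /etc/docker/daemon.json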
	I1009 11:51:16.011804    1966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:51:16.122915    1966 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 11:51:28.507156    1966 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.384301458s)
	I1009 11:51:28.507240    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1009 11:51:28.512754    1966 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1009 11:51:28.520634    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 11:51:28.526547    1966 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 11:51:28.612120    1966 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 11:51:28.698533    1966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:51:28.791093    1966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 11:51:28.797972    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 11:51:28.803379    1966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:51:28.891004    1966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1009 11:51:28.919806    1966 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 11:51:28.919911    1966 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 11:51:28.922216    1966 start.go:563] Will wait 60s for crictl version
	I1009 11:51:28.922279    1966 ssh_runner.go:195] Run: which crictl
	I1009 11:51:28.924051    1966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 11:51:28.944862    1966 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
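crictl reads its endpoint from the /etc/crictl.yaml written a few lines up, so this version probe is answered by cri-dockerd (hence RuntimeName: docker) rather than by containerd directly. The same probe can be reproduced by passing the socket explicitly; a minimal sketch:

  # equivalent manual probe; --runtime-endpoint overrides /etc/crictl.yaml
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version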
	I1009 11:51:28.944941    1966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 11:51:28.952326    1966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 11:51:28.968387    1966 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1009 11:51:28.968544    1966 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1009 11:51:28.976257    1966 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 11:51:28.979266    1966 kubeadm.go:883] updating cluster {Name:functional-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 11:51:28.979327    1966 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 11:51:28.979394    1966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 11:51:28.985373    1966 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-517000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1009 11:51:28.985377    1966 docker.go:615] Images already preloaded, skipping extraction
	I1009 11:51:28.985432    1966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 11:51:28.994675    1966 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-517000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1009 11:51:28.994680    1966 cache_images.go:84] Images are preloaded, skipping loading
	I1009 11:51:28.994690    1966 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I1009 11:51:28.994787    1966 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-517000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
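The double ExecStart in the generated kubelet drop-in above is the standard systemd override idiom, not a typo: an empty ExecStart= clears the command list inherited from the base kubelet.service, and the following ExecStart= installs the replacement command. The same pattern for any unit (names and paths illustrative):

  # /etc/systemd/system/example.service.d/override.conf
  [Service]
  ExecStart=
  ExecStart=/usr/local/bin/example --flag=value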
	I1009 11:51:28.994853    1966 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 11:51:29.010306    1966 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 11:51:29.010321    1966 cni.go:84] Creating CNI manager for ""
	I1009 11:51:29.010327    1966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 11:51:29.010331    1966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 11:51:29.010340    1966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-517000 NodeName:functional-517000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 11:51:29.010406    1966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-517000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 11:51:29.010477    1966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 11:51:29.014475    1966 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 11:51:29.014507    1966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 11:51:29.018207    1966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 11:51:29.024335    1966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 11:51:29.030286    1966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I1009 11:51:29.036515    1966 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1009 11:51:29.038075    1966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:51:29.128149    1966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 11:51:29.134435    1966 certs.go:68] Setting up /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000 for IP: 192.168.105.4
	I1009 11:51:29.134439    1966 certs.go:194] generating shared ca certs ...
	I1009 11:51:29.134447    1966 certs.go:226] acquiring lock for ca certs: {Name:mkbf858b3b2074a12d126c3a2fed20f98f420e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:51:29.134621    1966 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key
	I1009 11:51:29.134689    1966 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key
	I1009 11:51:29.134694    1966 certs.go:256] generating profile certs ...
	I1009 11:51:29.134757    1966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.key
	I1009 11:51:29.134818    1966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/apiserver.key.ade58ebf
	I1009 11:51:29.134877    1966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/proxy-client.key
	I1009 11:51:29.135047    1966 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem (1338 bytes)
	W1009 11:51:29.135079    1966 certs.go:480] ignoring /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686_empty.pem, impossibly tiny 0 bytes
	I1009 11:51:29.135083    1966 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 11:51:29.135100    1966 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem (1078 bytes)
	I1009 11:51:29.135122    1966 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem (1123 bytes)
	I1009 11:51:29.135141    1966 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem (1679 bytes)
	I1009 11:51:29.135179    1966 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem (1708 bytes)
	I1009 11:51:29.135507    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 11:51:29.144146    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 11:51:29.152532    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 11:51:29.161223    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 11:51:29.169617    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 11:51:29.178368    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 11:51:29.186360    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 11:51:29.194899    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 11:51:29.202877    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /usr/share/ca-certificates/16862.pem (1708 bytes)
	I1009 11:51:29.210992    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 11:51:29.219281    1966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem --> /usr/share/ca-certificates/1686.pem (1338 bytes)
	I1009 11:51:29.227324    1966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 11:51:29.233334    1966 ssh_runner.go:195] Run: openssl version
	I1009 11:51:29.235381    1966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 11:51:29.239160    1966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 11:51:29.240976    1966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 11:51:29.241009    1966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 11:51:29.243048    1966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 11:51:29.246650    1966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1686.pem && ln -fs /usr/share/ca-certificates/1686.pem /etc/ssl/certs/1686.pem"
	I1009 11:51:29.250925    1966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1686.pem
	I1009 11:51:29.252496    1966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:49 /usr/share/ca-certificates/1686.pem
	I1009 11:51:29.252523    1966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1686.pem
	I1009 11:51:29.254537    1966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1686.pem /etc/ssl/certs/51391683.0"
	I1009 11:51:29.258377    1966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16862.pem && ln -fs /usr/share/ca-certificates/16862.pem /etc/ssl/certs/16862.pem"
	I1009 11:51:29.262073    1966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16862.pem
	I1009 11:51:29.263809    1966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:49 /usr/share/ca-certificates/16862.pem
	I1009 11:51:29.263832    1966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16862.pem
	I1009 11:51:29.265899    1966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16862.pem /etc/ssl/certs/3ec20f2e.0"
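The ls/openssl/ln sequence repeated above installs each CA using OpenSSL's hashed-directory convention: verifiers look up /etc/ssl/certs/<subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. Done by hand (certificate name illustrative):

  cp my-ca.pem /usr/share/ca-certificates/my-ca.pem
  ln -fs /usr/share/ca-certificates/my-ca.pem \
    "/etc/ssl/certs/$(openssl x509 -hash -noout -in my-ca.pem).0"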
	I1009 11:51:29.269710    1966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 11:51:29.271379    1966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 11:51:29.273448    1966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 11:51:29.275585    1966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 11:51:29.277566    1966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 11:51:29.279869    1966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 11:51:29.281851    1966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
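Each openssl run above is a 24-hour expiry check: -checkend 86400 exits nonzero if the certificate expires within the next 86400 seconds, which is what would force regeneration instead of the "skipping valid ... cert" path seen earlier. For example:

  # exit status 0 = valid for at least another day (path illustrative)
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
    && echo "still valid" || echo "expiring within 24h"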
	I1009 11:51:29.283901    1966 kubeadm.go:392] StartCluster: {Name:functional-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:51:29.283984    1966 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 11:51:29.296030    1966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 11:51:29.299596    1966 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 11:51:29.299600    1966 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 11:51:29.299631    1966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 11:51:29.302821    1966 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 11:51:29.303094    1966 kubeconfig.go:125] found "functional-517000" server: "https://192.168.105.4:8441"
	I1009 11:51:29.303769    1966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 11:51:29.306942    1966 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1009 11:51:29.306946    1966 kubeadm.go:1160] stopping kube-system containers ...
	I1009 11:51:29.306989    1966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 11:51:29.314187    1966 docker.go:483] Stopping containers: [dcaae86796df dbe381aa7d6b a190ed3834e8 5afa893214c4 01e7426defe4 ba6efbc178a8 b08e92efa185 b07d41e12b9c 038318400a7c 41dcefdb4ed2 4f41513d2449 b1e05be534ee 4000ae9084ab 86844d172193 8153b9c9da23 d026f504b83b 46c5ef79dcbb de2c75f7785f 3a9ac5a6df53 d8aa327d0528 115ea7f8e9a9 f76bbe1eae4f 752b13160370 13831e0bb808 0ff22bd87cc5 427ddead555b e53bf767705c ac2677f72458]
	I1009 11:51:29.314264    1966 ssh_runner.go:195] Run: docker stop dcaae86796df dbe381aa7d6b a190ed3834e8 5afa893214c4 01e7426defe4 ba6efbc178a8 b08e92efa185 b07d41e12b9c 038318400a7c 41dcefdb4ed2 4f41513d2449 b1e05be534ee 4000ae9084ab 86844d172193 8153b9c9da23 d026f504b83b 46c5ef79dcbb de2c75f7785f 3a9ac5a6df53 d8aa327d0528 115ea7f8e9a9 f76bbe1eae4f 752b13160370 13831e0bb808 0ff22bd87cc5 427ddead555b e53bf767705c ac2677f72458
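The two docker calls above are a list-then-stop pair: the first collects container IDs for every kube-system pod by the k8s_ name pattern, and the second stops that batch. Piped together (same name filter as in the log; xargs -r skips the call when the list is empty):

  docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
    | xargs -r docker stop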
	I1009 11:51:29.321816    1966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 11:51:29.436545    1966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 11:51:29.442916    1966 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct  9 18:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  9 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  9 18:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Oct  9 18:50 /etc/kubernetes/scheduler.conf
	
	I1009 11:51:29.442977    1966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 11:51:29.448284    1966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 11:51:29.452873    1966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 11:51:29.457523    1966 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 11:51:29.457553    1966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 11:51:29.462006    1966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 11:51:29.466038    1966 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 11:51:29.466067    1966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 11:51:29.469937    1966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 11:51:29.474024    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 11:51:29.491533    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 11:51:30.116468    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 11:51:30.240639    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 11:51:30.259781    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
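Rather than a full `kubeadm init`, the restart path above re-runs only the phases that rebuild control-plane state (certificates, kubeconfigs, kubelet bootstrap, static pod manifests, local etcd) against the freshly written kubeadm.yaml. Collapsed into a loop, the sequence is roughly as follows (a sketch; $phase is intentionally left unquoted so it word-splits into phase and subphase, and KPATH is a name introduced here):

  KPATH="/var/lib/minikube/binaries/v1.31.1:$PATH"
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="$KPATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done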
	I1009 11:51:30.286604    1966 api_server.go:52] waiting for apiserver process to appear ...
	I1009 11:51:30.286678    1966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 11:51:30.788783    1966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 11:51:31.288781    1966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 11:51:31.294261    1966 api_server.go:72] duration metric: took 1.007666292s to wait for apiserver process to appear ...
	I1009 11:51:31.294271    1966 api_server.go:88] waiting for apiserver healthz status ...
	I1009 11:51:31.294282    1966 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1009 11:51:32.716538    1966 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 11:51:32.716547    1966 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 11:51:32.716553    1966 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1009 11:51:32.756299    1966 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 11:51:32.756310    1966 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 11:51:32.796316    1966 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1009 11:51:32.799324    1966 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 11:51:32.799329    1966 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 11:51:33.296452    1966 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1009 11:51:33.306744    1966 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 11:51:33.306763    1966 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 11:51:33.796325    1966 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1009 11:51:33.800735    1966 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1009 11:51:33.805293    1966 api_server.go:141] control plane version: v1.31.1
	I1009 11:51:33.805302    1966 api_server.go:131] duration metric: took 2.511051125s to wait for apiserver health ...
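The polling above walks through three states: 403 while anonymous access to /healthz is refused by authorization, 500 while poststarthook checks (rbac/bootstrap-roles and friends) are still failing, then 200 once bootstrap completes. With cluster credentials, the same endpoint can be queried through kubectl:

  # prints "ok" once healthy; ?verbose lists each check like the log output above
  kubectl get --raw /healthz
  kubectl get --raw '/healthz?verbose'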
	I1009 11:51:33.805307    1966 cni.go:84] Creating CNI manager for ""
	I1009 11:51:33.805312    1966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 11:51:33.809437    1966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 11:51:33.815562    1966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 11:51:33.820020    1966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 11:51:33.832372    1966 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 11:51:33.838394    1966 system_pods.go:59] 7 kube-system pods found
	I1009 11:51:33.838406    1966 system_pods.go:61] "coredns-7c65d6cfc9-6j6vh" [05c8a09a-ae06-4452-bf84-f712285c7254] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 11:51:33.838409    1966 system_pods.go:61] "etcd-functional-517000" [7dea046a-e827-439b-afc3-98887e760090] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 11:51:33.838411    1966 system_pods.go:61] "kube-apiserver-functional-517000" [c9bbc2f3-16d2-4403-b764-c5b995c5b19a] Pending
	I1009 11:51:33.838414    1966 system_pods.go:61] "kube-controller-manager-functional-517000" [4630e5ce-eda8-4368-bfec-8099aadbded6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 11:51:33.838416    1966 system_pods.go:61] "kube-proxy-62vrr" [6f50aac0-4d19-49cb-b19b-714f93ff18c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 11:51:33.838418    1966 system_pods.go:61] "kube-scheduler-functional-517000" [68fc1c48-f211-4c6e-b36e-48964349e8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 11:51:33.838419    1966 system_pods.go:61] "storage-provisioner" [ac845b3d-9cbd-47f6-b00b-1a147a94e1fc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 11:51:33.838422    1966 system_pods.go:74] duration metric: took 6.044583ms to wait for pod list to return data ...
	I1009 11:51:33.838425    1966 node_conditions.go:102] verifying NodePressure condition ...
	I1009 11:51:33.841799    1966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 11:51:33.841807    1966 node_conditions.go:123] node cpu capacity is 2
	I1009 11:51:33.841812    1966 node_conditions.go:105] duration metric: took 3.384666ms to run NodePressure ...
	I1009 11:51:33.841820    1966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 11:51:34.075744    1966 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 11:51:34.078199    1966 kubeadm.go:739] kubelet initialised
	I1009 11:51:34.078203    1966 kubeadm.go:740] duration metric: took 2.451667ms waiting for restarted kubelet to initialise ...
	I1009 11:51:34.078207    1966 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 11:51:34.080790    1966 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:36.085672    1966 pod_ready.go:103] pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace has status "Ready":"False"
	I1009 11:51:38.095719    1966 pod_ready.go:103] pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace has status "Ready":"False"
	I1009 11:51:40.594406    1966 pod_ready.go:103] pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace has status "Ready":"False"
	I1009 11:51:42.595656    1966 pod_ready.go:103] pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace has status "Ready":"False"
	I1009 11:51:43.094519    1966 pod_ready.go:93] pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:43.094540    1966 pod_ready.go:82] duration metric: took 9.013831292s for pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:43.094557    1966 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.104822    1966 pod_ready.go:93] pod "etcd-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:45.104839    1966 pod_ready.go:82] duration metric: took 2.010294625s for pod "etcd-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.104849    1966 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.111388    1966 pod_ready.go:93] pod "kube-apiserver-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:45.111396    1966 pod_ready.go:82] duration metric: took 6.54025ms for pod "kube-apiserver-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.111405    1966 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.116594    1966 pod_ready.go:93] pod "kube-controller-manager-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:45.116601    1966 pod_ready.go:82] duration metric: took 5.190458ms for pod "kube-controller-manager-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.116608    1966 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-62vrr" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.122194    1966 pod_ready.go:93] pod "kube-proxy-62vrr" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:45.122200    1966 pod_ready.go:82] duration metric: took 5.588042ms for pod "kube-proxy-62vrr" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.122206    1966 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.637554    1966 pod_ready.go:93] pod "kube-scheduler-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:45.637580    1966 pod_ready.go:82] duration metric: took 515.368792ms for pod "kube-scheduler-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:45.637604    1966 pod_ready.go:39] duration metric: took 11.559504833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 11:51:45.637648    1966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 11:51:45.652239    1966 ops.go:34] apiserver oom_adj: -16
	I1009 11:51:45.652252    1966 kubeadm.go:597] duration metric: took 16.352804417s to restartPrimaryControlPlane
	I1009 11:51:45.652261    1966 kubeadm.go:394] duration metric: took 16.3685205s to StartCluster
	I1009 11:51:45.652291    1966 settings.go:142] acquiring lock: {Name:mk60ce4ac2055fafaa579c122d2ddfc9feae1fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:51:45.652574    1966 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:51:45.653621    1966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:51:45.654308    1966 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 11:51:45.654352    1966 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 11:51:45.654476    1966 addons.go:69] Setting storage-provisioner=true in profile "functional-517000"
	I1009 11:51:45.654523    1966 addons.go:234] Setting addon storage-provisioner=true in "functional-517000"
	W1009 11:51:45.654531    1966 addons.go:243] addon storage-provisioner should already be in state true
	I1009 11:51:45.654514    1966 addons.go:69] Setting default-storageclass=true in profile "functional-517000"
	I1009 11:51:45.654547    1966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-517000"
	I1009 11:51:45.654558    1966 host.go:66] Checking if "functional-517000" exists ...
	I1009 11:51:45.654597    1966 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 11:51:45.657202    1966 addons.go:234] Setting addon default-storageclass=true in "functional-517000"
	W1009 11:51:45.657211    1966 addons.go:243] addon default-storageclass should already be in state true
	I1009 11:51:45.657230    1966 host.go:66] Checking if "functional-517000" exists ...
	I1009 11:51:45.658312    1966 out.go:177] * Verifying Kubernetes components...
	I1009 11:51:45.663165    1966 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 11:51:45.663175    1966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 11:51:45.663187    1966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
	I1009 11:51:45.665276    1966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 11:51:45.665383    1966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 11:51:45.669359    1966 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 11:51:45.669367    1966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 11:51:45.669377    1966 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
	I1009 11:51:45.793335    1966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 11:51:45.799183    1966 node_ready.go:35] waiting up to 6m0s for node "functional-517000" to be "Ready" ...
	I1009 11:51:45.800633    1966 node_ready.go:49] node "functional-517000" has status "Ready":"True"
	I1009 11:51:45.800641    1966 node_ready.go:38] duration metric: took 1.44675ms for node "functional-517000" to be "Ready" ...
	I1009 11:51:45.800643    1966 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 11:51:45.866335    1966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 11:51:45.877907    1966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 11:51:45.901562    1966 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:46.166197    1966 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1009 11:51:46.170148    1966 addons.go:510] duration metric: took 515.832167ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1009 11:51:46.300320    1966 pod_ready.go:93] pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:46.300325    1966 pod_ready.go:82] duration metric: took 398.759833ms for pod "coredns-7c65d6cfc9-6j6vh" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:46.300329    1966 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:46.707055    1966 pod_ready.go:93] pod "etcd-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:46.707083    1966 pod_ready.go:82] duration metric: took 406.745291ms for pod "etcd-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:46.707105    1966 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:47.105666    1966 pod_ready.go:93] pod "kube-apiserver-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:47.105695    1966 pod_ready.go:82] duration metric: took 398.574834ms for pod "kube-apiserver-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:47.105714    1966 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:47.506886    1966 pod_ready.go:93] pod "kube-controller-manager-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:47.506918    1966 pod_ready.go:82] duration metric: took 401.192791ms for pod "kube-controller-manager-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:47.506949    1966 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-62vrr" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:47.906144    1966 pod_ready.go:93] pod "kube-proxy-62vrr" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:47.906171    1966 pod_ready.go:82] duration metric: took 399.209875ms for pod "kube-proxy-62vrr" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:47.906199    1966 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:48.302137    1966 pod_ready.go:93] pod "kube-scheduler-functional-517000" in "kube-system" namespace has status "Ready":"True"
	I1009 11:51:48.302151    1966 pod_ready.go:82] duration metric: took 395.946583ms for pod "kube-scheduler-functional-517000" in "kube-system" namespace to be "Ready" ...
	I1009 11:51:48.302165    1966 pod_ready.go:39] duration metric: took 2.5015425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 11:51:48.302190    1966 api_server.go:52] waiting for apiserver process to appear ...
	I1009 11:51:48.302406    1966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 11:51:48.318404    1966 api_server.go:72] duration metric: took 2.664100291s to wait for apiserver process to appear ...
	I1009 11:51:48.318419    1966 api_server.go:88] waiting for apiserver healthz status ...
	I1009 11:51:48.318435    1966 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1009 11:51:48.324853    1966 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1009 11:51:48.325611    1966 api_server.go:141] control plane version: v1.31.1
	I1009 11:51:48.325617    1966 api_server.go:131] duration metric: took 7.194667ms to wait for apiserver health ...
	I1009 11:51:48.325622    1966 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 11:51:48.514974    1966 system_pods.go:59] 7 kube-system pods found
	I1009 11:51:48.515012    1966 system_pods.go:61] "coredns-7c65d6cfc9-6j6vh" [05c8a09a-ae06-4452-bf84-f712285c7254] Running
	I1009 11:51:48.515023    1966 system_pods.go:61] "etcd-functional-517000" [7dea046a-e827-439b-afc3-98887e760090] Running
	I1009 11:51:48.515030    1966 system_pods.go:61] "kube-apiserver-functional-517000" [c9bbc2f3-16d2-4403-b764-c5b995c5b19a] Running
	I1009 11:51:48.515035    1966 system_pods.go:61] "kube-controller-manager-functional-517000" [4630e5ce-eda8-4368-bfec-8099aadbded6] Running
	I1009 11:51:48.515039    1966 system_pods.go:61] "kube-proxy-62vrr" [6f50aac0-4d19-49cb-b19b-714f93ff18c4] Running
	I1009 11:51:48.515046    1966 system_pods.go:61] "kube-scheduler-functional-517000" [68fc1c48-f211-4c6e-b36e-48964349e8fd] Running
	I1009 11:51:48.515051    1966 system_pods.go:61] "storage-provisioner" [ac845b3d-9cbd-47f6-b00b-1a147a94e1fc] Running
	I1009 11:51:48.515061    1966 system_pods.go:74] duration metric: took 189.435959ms to wait for pod list to return data ...
	I1009 11:51:48.515074    1966 default_sa.go:34] waiting for default service account to be created ...
	I1009 11:51:48.705512    1966 default_sa.go:45] found service account: "default"
	I1009 11:51:48.705544    1966 default_sa.go:55] duration metric: took 190.458333ms for default service account to be created ...
	I1009 11:51:48.705566    1966 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 11:51:48.910298    1966 system_pods.go:86] 7 kube-system pods found
	I1009 11:51:48.910325    1966 system_pods.go:89] "coredns-7c65d6cfc9-6j6vh" [05c8a09a-ae06-4452-bf84-f712285c7254] Running
	I1009 11:51:48.910334    1966 system_pods.go:89] "etcd-functional-517000" [7dea046a-e827-439b-afc3-98887e760090] Running
	I1009 11:51:48.910341    1966 system_pods.go:89] "kube-apiserver-functional-517000" [c9bbc2f3-16d2-4403-b764-c5b995c5b19a] Running
	I1009 11:51:48.910346    1966 system_pods.go:89] "kube-controller-manager-functional-517000" [4630e5ce-eda8-4368-bfec-8099aadbded6] Running
	I1009 11:51:48.910352    1966 system_pods.go:89] "kube-proxy-62vrr" [6f50aac0-4d19-49cb-b19b-714f93ff18c4] Running
	I1009 11:51:48.910356    1966 system_pods.go:89] "kube-scheduler-functional-517000" [68fc1c48-f211-4c6e-b36e-48964349e8fd] Running
	I1009 11:51:48.910361    1966 system_pods.go:89] "storage-provisioner" [ac845b3d-9cbd-47f6-b00b-1a147a94e1fc] Running
	I1009 11:51:48.910372    1966 system_pods.go:126] duration metric: took 204.802292ms to wait for k8s-apps to be running ...
	I1009 11:51:48.910382    1966 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 11:51:48.910620    1966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 11:51:48.930616    1966 system_svc.go:56] duration metric: took 20.213167ms WaitForService to wait for kubelet
	I1009 11:51:48.930633    1966 kubeadm.go:582] duration metric: took 3.276336416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 11:51:48.930655    1966 node_conditions.go:102] verifying NodePressure condition ...
	I1009 11:51:49.105530    1966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 11:51:49.105559    1966 node_conditions.go:123] node cpu capacity is 2
	I1009 11:51:49.105595    1966 node_conditions.go:105] duration metric: took 174.929458ms to run NodePressure ...
	I1009 11:51:49.105626    1966 start.go:241] waiting for startup goroutines ...
	I1009 11:51:49.105644    1966 start.go:246] waiting for cluster config update ...
	I1009 11:51:49.105668    1966 start.go:255] writing updated cluster config ...
	I1009 11:51:49.107113    1966 ssh_runner.go:195] Run: rm -f paused
	I1009 11:51:49.177337    1966 start.go:600] kubectl: 1.30.2, cluster: 1.31.1 (minor skew: 1)
	I1009 11:51:49.181977    1966 out.go:177] * Done! kubectl is now configured to use "functional-517000" cluster and "default" namespace by default
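	
	The startup trace above ends with a healthz probe against https://192.168.105.4:8441/healthz returning 200 "ok". The following is a minimal standalone sketch of that kind of probe, for illustration only (it is not minikube's api_server.go code, and InsecureSkipVerify stands in for loading the cluster CA from /var/lib/minikube/certs):
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		func main() {
			// Illustrative healthz probe; a real client should verify the
			// cluster CA instead of skipping certificate verification.
			client := &http.Client{
				Timeout: 5 * time.Second,
				Transport: &http.Transport{
					TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
				},
			}
			resp, err := client.Get("https://192.168.105.4:8441/healthz")
			if err != nil {
				fmt.Println("healthz check failed:", err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			// The log above shows this returning 200 with body "ok".
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
		}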
	
	
	==> Docker <==
	Oct 09 18:52:26 functional-517000 dockerd[5795]: time="2024-10-09T18:52:26.270495183Z" level=warning msg="cleaning up after shim disconnected" id=568a1997e1e7afeecae3d71325b8d184e43727b45adcb987e40ea39491cb6011 namespace=moby
	Oct 09 18:52:26 functional-517000 dockerd[5795]: time="2024-10-09T18:52:26.270507812Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 09 18:52:27 functional-517000 dockerd[5795]: time="2024-10-09T18:52:27.885705893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 18:52:27 functional-517000 dockerd[5795]: time="2024-10-09T18:52:27.885803841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 18:52:27 functional-517000 dockerd[5795]: time="2024-10-09T18:52:27.885829683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:27 functional-517000 dockerd[5795]: time="2024-10-09T18:52:27.885923589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:27 functional-517000 cri-dockerd[6051]: time="2024-10-09T18:52:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad9c994675f9584f824764a5c5e5b04a697d8ba6edecc06a5ebfbe543c59b585/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.398492728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.398561084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.398578548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.398620269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:28 functional-517000 dockerd[5789]: time="2024-10-09T18:52:28.419118066Z" level=info msg="ignoring event" container=93962dd42a5009524726a8dd83795622ace99a226632873d250e32615a7f4e10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.419182128Z" level=info msg="shim disconnected" id=93962dd42a5009524726a8dd83795622ace99a226632873d250e32615a7f4e10 namespace=moby
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.419209054Z" level=warning msg="cleaning up after shim disconnected" id=93962dd42a5009524726a8dd83795622ace99a226632873d250e32615a7f4e10 namespace=moby
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.419213138Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 09 18:52:28 functional-517000 cri-dockerd[6051]: time="2024-10-09T18:52:28Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.710207715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.710412073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.710535946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:28 functional-517000 dockerd[5795]: time="2024-10-09T18:52:28.710601759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:37 functional-517000 dockerd[5795]: time="2024-10-09T18:52:37.067417201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 18:52:37 functional-517000 dockerd[5795]: time="2024-10-09T18:52:37.067667907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 18:52:37 functional-517000 dockerd[5795]: time="2024-10-09T18:52:37.067684538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:37 functional-517000 dockerd[5795]: time="2024-10-09T18:52:37.067784820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 18:52:37 functional-517000 cri-dockerd[6051]: time="2024-10-09T18:52:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df53843b0041186fee68a57f7cb060261bb44c8da8a963b14c6f4f51e153d633/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9cdfd063e91bf       nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0   9 seconds ago        Running             myfrontend                0                   ad9c994675f95       sp-pod
	93962dd42a500       72565bf5bbedf                                                                   9 seconds ago        Exited              echoserver-arm            2                   9f59a8bed3674       hello-node-connect-65d86f57f4-88jjn
	f3d812a18b35d       72565bf5bbedf                                                                   21 seconds ago       Exited              echoserver-arm            2                   bc6598b480243       hello-node-64b4f8f9ff-5rqvj
	781391d067f01       nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250   33 seconds ago       Running             nginx                     0                   7cd22cf79cf6f       nginx-svc
	c27a4b3fe7834       2f6c962e7b831                                                                   About a minute ago   Running             coredns                   2                   7d319cb5ed043       coredns-7c65d6cfc9-6j6vh
	0e3ab9139e53f       ba04bb24b9575                                                                   About a minute ago   Running             storage-provisioner       3                   7565b3b425de7       storage-provisioner
	3afd220050b21       24a140c548c07                                                                   About a minute ago   Running             kube-proxy                2                   19668966b82cf       kube-proxy-62vrr
	8ea1f92e3e2d0       7f8aa378bb47d                                                                   About a minute ago   Running             kube-scheduler            2                   5731d379623f1       kube-scheduler-functional-517000
	53f95f20007e6       279f381cb3736                                                                   About a minute ago   Running             kube-controller-manager   2                   ceefda59fd399       kube-controller-manager-functional-517000
	2dccc5d4420b4       27e3830e14027                                                                   About a minute ago   Running             etcd                      2                   d3363dfabf0db       etcd-functional-517000
	65be091b3bee4       d3f53a98c0a9d                                                                   About a minute ago   Running             kube-apiserver            0                   76939de013ece       kube-apiserver-functional-517000
	dcaae86796dfe       ba04bb24b9575                                                                   About a minute ago   Exited              storage-provisioner       2                   b08e92efa1856       storage-provisioner
	dbe381aa7d6b0       2f6c962e7b831                                                                   2 minutes ago        Exited              coredns                   1                   01e7426defe46       coredns-7c65d6cfc9-6j6vh
	a190ed3834e8e       24a140c548c07                                                                   2 minutes ago        Exited              kube-proxy                1                   ba6efbc178a82       kube-proxy-62vrr
	b07d41e12b9c4       27e3830e14027                                                                   2 minutes ago        Exited              etcd                      1                   b1e05be534eef       etcd-functional-517000
	41dcefdb4ed21       7f8aa378bb47d                                                                   2 minutes ago        Exited              kube-scheduler            1                   8153b9c9da23e       kube-scheduler-functional-517000
	4f41513d24496       279f381cb3736                                                                   2 minutes ago        Exited              kube-controller-manager   1                   86844d1721936       kube-controller-manager-functional-517000
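	
	Reading the table: the ATTEMPT column counts container restarts within a pod, so the two Exited echoserver-arm containers at ATTEMPT 2 show the hello-node and hello-node-connect deployments crash-looping, while each Exited control-plane container further down (ATTEMPT 1) pairs with a Running ATTEMPT 2 instance above it, left over from the restart sequence earlier in the trace.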
	
	
	==> coredns [c27a4b3fe783] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34446 - 30731 "HINFO IN 2650072419900657538.849304516600701137. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.059671421s
	[INFO] 10.244.0.1:42681 - 7045 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00011083s
	[INFO] 10.244.0.1:48083 - 9139 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000161722s
	[INFO] 10.244.0.1:14086 - 36552 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00002555s
	[INFO] 10.244.0.1:13040 - 27304 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001819584s
	[INFO] 10.244.0.1:44204 - 8210 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000069941s
	[INFO] 10.244.0.1:59866 - 34279 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000311732s
	
	
	==> coredns [dbe381aa7d6b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49618 - 2934 "HINFO IN 6437480209382109792.5253849857295745775. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012101399s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[436258579]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 18:50:29.296) (total time: 30001ms):
	Trace[436258579]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:50:59.297)
	Trace[436258579]: [30.001124306s] [30.001124306s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[344287071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 18:50:29.296) (total time: 30001ms):
	Trace[344287071]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:50:59.297)
	Trace[344287071]: [30.001514781s] [30.001514781s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2055850872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 18:50:29.297) (total time: 30001ms):
	Trace[2055850872]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:50:59.298)
	Trace[2055850872]: [30.001247337s] [30.001247337s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
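	
	The exited CoreDNS instance above spent 30s failing to dial the in-cluster apiserver service VIP (10.96.0.1:443) while the control plane restarted, then received SIGTERM; its replacement (c27a4b3fe783, above) synced normally. A hypothetical reachability probe for that VIP, only meaningful when run from inside the cluster, reduces to:
	
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		func main() {
			// Hypothetical probe for the apiserver service VIP that the
			// CoreDNS reflector errors above were dialing.
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
			if err != nil {
				// e.g. "i/o timeout", as logged above while the apiserver was down
				fmt.Println("VIP unreachable:", err)
				return
			}
			conn.Close()
			fmt.Println("VIP reachable")
		}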
	
	
	==> describe nodes <==
	Name:               functional-517000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-517000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=functional-517000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T11_49_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:49:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-517000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 18:52:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 18:52:33 +0000   Wed, 09 Oct 2024 18:49:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 18:52:33 +0000   Wed, 09 Oct 2024 18:49:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 18:52:33 +0000   Wed, 09 Oct 2024 18:49:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 18:52:33 +0000   Wed, 09 Oct 2024 18:50:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-517000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecefc873954f49feb75cbc20966cff64
	  System UUID:                ecefc873954f49feb75cbc20966cff64
	  Boot ID:                    d322641c-c347-4fe5-8d74-e2b9baad9736
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     hello-node-64b4f8f9ff-5rqvj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     hello-node-connect-65d86f57f4-88jjn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7c65d6cfc9-6j6vh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m34s
	  kube-system                 etcd-functional-517000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m40s
	  kube-system                 kube-apiserver-functional-517000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-functional-517000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-62vrr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-functional-517000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  Starting                 63s                    kube-proxy       
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m40s (x2 over 2m40s)  kubelet          Node functional-517000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m40s (x2 over 2m40s)  kubelet          Node functional-517000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m40s (x2 over 2m40s)  kubelet          Node functional-517000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m36s                  kubelet          Node functional-517000 status is now: NodeReady
	  Normal  RegisteredNode           2m35s                  node-controller  Node functional-517000 event: Registered Node functional-517000 in Controller
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node functional-517000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node functional-517000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node functional-517000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m6s                   node-controller  Node functional-517000 event: Registered Node functional-517000 in Controller
	  Normal  Starting                 67s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node functional-517000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node functional-517000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x7 over 67s)      kubelet          Node functional-517000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  67s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                    node-controller  Node functional-517000 event: Registered Node functional-517000 in Controller
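	
	As a cross-check on the Allocated resources table: the 750m CPU request is the sum of the per-pod requests listed above (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), i.e. 37.5% of the node's 2-CPU (2000m) capacity, truncated to 37% in the output; the 170Mi memory request is 70Mi (coredns) + 100Mi (etcd), about 4% of the 3904744Ki allocatable memory.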
	
	
	==> dmesg <==
	[  +3.405106] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.775678] kauditd_printk_skb: 34 callbacks suppressed
	[Oct 9 18:51] systemd-fstab-generator[4871]: Ignoring "noauto" option for root device
	[ +11.639316] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.056173] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.106328] systemd-fstab-generator[5345]: Ignoring "noauto" option for root device
	[  +0.093248] systemd-fstab-generator[5357]: Ignoring "noauto" option for root device
	[  +0.113474] systemd-fstab-generator[5371]: Ignoring "noauto" option for root device
	[  +5.135183] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.373111] systemd-fstab-generator[6004]: Ignoring "noauto" option for root device
	[  +0.088118] systemd-fstab-generator[6016]: Ignoring "noauto" option for root device
	[  +0.091961] systemd-fstab-generator[6028]: Ignoring "noauto" option for root device
	[  +0.099711] systemd-fstab-generator[6043]: Ignoring "noauto" option for root device
	[  +0.236351] systemd-fstab-generator[6209]: Ignoring "noauto" option for root device
	[  +1.104919] systemd-fstab-generator[6332]: Ignoring "noauto" option for root device
	[  +3.415097] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.168942] kauditd_printk_skb: 33 callbacks suppressed
	[  +2.953614] systemd-fstab-generator[7362]: Ignoring "noauto" option for root device
	[  +4.852907] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.598316] kauditd_printk_skb: 19 callbacks suppressed
	[Oct 9 18:52] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.026190] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.752392] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.702454] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.872257] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [2dccc5d4420b] <==
	{"level":"info","ts":"2024-10-09T18:51:31.133112Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-09T18:51:31.133152Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:51:31.133195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:51:31.134230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:51:31.134828Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T18:51:31.134910Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-09T18:51:31.134924Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-09T18:51:31.135799Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T18:51:31.135823Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T18:51:32.228197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-09T18:51:32.228356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-09T18:51:32.228432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-09T18:51:32.228475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-09T18:51:32.228514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-09T18:51:32.228546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-09T18:51:32.228586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-09T18:51:32.233748Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-517000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T18:51:32.233875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:51:32.234454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:51:32.235197Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T18:51:32.235345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T18:51:32.236443Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:51:32.236444Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:51:32.238109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-09T18:51:32.238789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b07d41e12b9c] <==
	{"level":"info","ts":"2024-10-09T18:50:27.646303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-09T18:50:27.646366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-09T18:50:27.646399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-09T18:50:27.646416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-09T18:50:27.646470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-09T18:50:27.646540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-09T18:50:27.651872Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-517000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T18:50:27.652145Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:50:27.652422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T18:50:27.652479Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T18:50:27.652523Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:50:27.654638Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:50:27.654638Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:50:27.657211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T18:50:27.658378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-09T18:51:16.168042Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-09T18:51:16.168072Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-517000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-09T18:51:16.168124Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T18:51:16.168169Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T18:51:16.178430Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T18:51:16.178455Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-09T18:51:16.178478Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-09T18:51:16.182604Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-09T18:51:16.182655Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-09T18:51:16.182659Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-517000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:52:37 up 3 min,  0 users,  load average: 0.49, 0.35, 0.14
	Linux functional-517000 5.10.207 #1 SMP PREEMPT Tue Oct 8 12:02:09 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65be091b3bee] <==
	I1009 18:51:32.836564       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 18:51:32.836519       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1009 18:51:32.836524       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 18:51:32.836933       1 aggregator.go:171] initial CRD sync complete...
	I1009 18:51:32.836941       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 18:51:32.836943       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 18:51:32.836946       1 cache.go:39] Caches are synced for autoregister controller
	I1009 18:51:32.839643       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1009 18:51:32.840780       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 18:51:32.876001       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 18:51:33.737492       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 18:51:33.928504       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 18:51:33.937371       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 18:51:33.954402       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 18:51:33.962758       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 18:51:33.964929       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 18:51:36.156363       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 18:51:36.514141       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 18:51:50.617172       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.83.31"}
	I1009 18:51:56.171593       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 18:51:56.217101       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.49.26"}
	I1009 18:52:00.271014       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.161.56"}
	I1009 18:52:10.738965       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.70.29"}
	E1009 18:52:26.105619       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49608: use of closed network connection
	E1009 18:52:34.343315       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49614: use of closed network connection
	
	
	==> kube-controller-manager [4f41513d2449] <==
	I1009 18:50:31.446043       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1009 18:50:31.447849       1 shared_informer.go:320] Caches are synced for namespace
	I1009 18:50:31.491931       1 shared_informer.go:320] Caches are synced for persistent volume
	I1009 18:50:31.491961       1 shared_informer.go:320] Caches are synced for ephemeral
	I1009 18:50:31.491972       1 shared_informer.go:320] Caches are synced for taint
	I1009 18:50:31.492056       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 18:50:31.492102       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-517000"
	I1009 18:50:31.492146       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 18:50:31.491976       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1009 18:50:31.494077       1 shared_informer.go:320] Caches are synced for deployment
	I1009 18:50:31.494111       1 shared_informer.go:320] Caches are synced for job
	I1009 18:50:31.495283       1 shared_informer.go:320] Caches are synced for attach detach
	I1009 18:50:31.495807       1 shared_informer.go:320] Caches are synced for disruption
	I1009 18:50:31.498668       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1009 18:50:31.591686       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1009 18:50:31.597264       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 18:50:31.610814       1 shared_informer.go:320] Caches are synced for cronjob
	I1009 18:50:31.645542       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 18:50:31.883596       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="441.460713ms"
	I1009 18:50:31.883732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.279µs"
	I1009 18:50:32.046099       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 18:50:32.144697       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 18:50:32.144775       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 18:51:03.237275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.581032ms"
	I1009 18:51:03.237463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.56µs"
	
	
	==> kube-controller-manager [53f95f20007e] <==
	I1009 18:51:36.359996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.004µs"
	I1009 18:51:36.726636       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 18:51:36.805146       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 18:51:36.805353       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 18:51:42.805929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.76869ms"
	I1009 18:51:42.806852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.714µs"
	I1009 18:51:56.179925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="6.816455ms"
	I1009 18:51:56.184788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="4.645158ms"
	I1009 18:51:56.192963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.046884ms"
	I1009 18:51:56.193123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="115.498µs"
	I1009 18:52:01.751335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.383µs"
	I1009 18:52:02.794106       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="39.722µs"
	I1009 18:52:03.199286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-517000"
	I1009 18:52:03.798035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.3µs"
	I1009 18:52:10.697745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.66103ms"
	I1009 18:52:10.702909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.121928ms"
	I1009 18:52:10.730593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.658844ms"
	I1009 18:52:10.730754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.969µs"
	I1009 18:52:11.902453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.093µs"
	I1009 18:52:12.938626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="55.81µs"
	I1009 18:52:16.978678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="26.218µs"
	I1009 18:52:28.354944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="120.581µs"
	I1009 18:52:28.363944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.342µs"
	I1009 18:52:29.177434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="68.272µs"
	I1009 18:52:33.943954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-517000"
	
	
	==> kube-proxy [3afd220050b2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 18:51:33.928148       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 18:51:33.931682       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1009 18:51:33.931751       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:51:33.939626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 18:51:33.939683       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 18:51:33.939714       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:51:33.940695       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:51:33.941055       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:51:33.941066       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:51:33.941774       1 config.go:199] "Starting service config controller"
	I1009 18:51:33.941889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:51:33.941906       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:51:33.941978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:51:33.942279       1 config.go:328] "Starting node config controller"
	I1009 18:51:33.942309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:51:34.046238       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:51:34.046310       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:51:34.046320       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a190ed3834e8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 18:50:29.295698       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 18:50:29.300246       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1009 18:50:29.300276       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:50:29.307841       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 18:50:29.307857       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 18:50:29.307867       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:50:29.308552       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:50:29.308650       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:50:29.308658       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:50:29.309078       1 config.go:199] "Starting service config controller"
	I1009 18:50:29.309087       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:50:29.309098       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:50:29.309100       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:50:29.309278       1 config.go:328] "Starting node config controller"
	I1009 18:50:29.309280       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:50:29.409279       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 18:50:29.409302       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:50:29.409308       1 shared_informer.go:320] Caches are synced for service config
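	
	In both kube-proxy runs above, the startup cleanup of any leftover nftables rules fails because the kernel (5.10.207 Buildroot, per the kernel section) rejects nft table creation with "Operation not supported"; the errors are incidental, and each run then proceeds with its configured iptables proxier, as the "Using iptables Proxier" lines show. A hypothetical probe for nft support (not kube-proxy's own code) is to run the same kind of command the errors show and see whether the kernel accepts it:
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		func main() {
			// Hypothetical probe; requires root and the nft binary. The table
			// name "probe-test" is arbitrary, chosen to avoid touching any
			// real kube-proxy state.
			out, err := exec.Command("nft", "add", "table", "ip", "probe-test").CombinedOutput()
			if err != nil {
				// On this guest kernel the expected result is
				// "Operation not supported", matching the logs above.
				fmt.Printf("nftables unsupported (%v): %s", err, out)
				return
			}
			fmt.Println("nftables supported; clean up with: nft delete table ip probe-test")
		}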
	
	
	==> kube-scheduler [41dcefdb4ed2] <==
	I1009 18:50:26.917891       1 serving.go:386] Generated self-signed cert in-memory
	W1009 18:50:28.187286       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 18:50:28.187410       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 18:50:28.187436       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 18:50:28.187471       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 18:50:28.211541       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1009 18:50:28.211555       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:50:28.212452       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:50:28.212488       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 18:50:28.212555       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 18:50:28.212591       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 18:50:28.313444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 18:51:16.176997       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8ea1f92e3e2d] <==
	I1009 18:51:31.606982       1 serving.go:386] Generated self-signed cert in-memory
	W1009 18:51:32.762648       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 18:51:32.762664       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 18:51:32.762669       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 18:51:32.762672       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 18:51:32.781057       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1009 18:51:32.785882       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:51:32.786840       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 18:51:32.786913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 18:51:32.786955       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 18:51:32.786980       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 18:51:32.887131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
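
Both scheduler instances log the same requestheader_controller.go warning about the extension-apiserver-authentication ConfigMap and then, as the following lines note, continue without it, so it is benign here. For completeness, a sketch of the remedy the message itself suggests: the binding name below is hypothetical, and because the scheduler authenticates as the user system:kube-scheduler rather than a service account, --user stands in for the --serviceaccount form shown in the log:

	kubectl --context functional-517000 -n kube-system create rolebinding scheduler-authentication-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler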
	
	
	==> kubelet <==
	Oct 09 18:52:26 functional-517000 kubelet[6339]: I1009 18:52:26.507628    6339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xf2fw\" (UniqueName: \"kubernetes.io/projected/253b33d5-24b7-4ae5-bfcd-d4efcb2dd404-kube-api-access-xf2fw\") on node \"functional-517000\" DevicePath \"\""
	Oct 09 18:52:27 functional-517000 kubelet[6339]: I1009 18:52:27.129528    6339 scope.go:117] "RemoveContainer" containerID="a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: I1009 18:52:27.151706    6339 scope.go:117] "RemoveContainer" containerID="a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: E1009 18:52:27.152654    6339 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4" containerID="a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: I1009 18:52:27.152695    6339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4"} err="failed to get container status \"a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4\": rpc error: code = Unknown desc = Error response from daemon: No such container: a0c7baf0d213204a1786e5ef5bc2866848d9140e33735af557168835f808d4f4"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: E1009 18:52:27.226450    6339 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="253b33d5-24b7-4ae5-bfcd-d4efcb2dd404" containerName="myfrontend"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: I1009 18:52:27.226484    6339 memory_manager.go:354] "RemoveStaleState removing state" podUID="253b33d5-24b7-4ae5-bfcd-d4efcb2dd404" containerName="myfrontend"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: I1009 18:52:27.418434    6339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlpr\" (UniqueName: \"kubernetes.io/projected/27c51341-669e-4026-b47c-799bff69ad8d-kube-api-access-7zlpr\") pod \"sp-pod\" (UID: \"27c51341-669e-4026-b47c-799bff69ad8d\") " pod="default/sp-pod"
	Oct 09 18:52:27 functional-517000 kubelet[6339]: I1009 18:52:27.418468    6339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b\" (UniqueName: \"kubernetes.io/host-path/27c51341-669e-4026-b47c-799bff69ad8d-pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b\") pod \"sp-pod\" (UID: \"27c51341-669e-4026-b47c-799bff69ad8d\") " pod="default/sp-pod"
	Oct 09 18:52:28 functional-517000 kubelet[6339]: I1009 18:52:28.344725    6339 scope.go:117] "RemoveContainer" containerID="f3d812a18b35dd523c636f83bf1a0e67613a07aa92e24576ae01e01195b58427"
	Oct 09 18:52:28 functional-517000 kubelet[6339]: E1009 18:52:28.344898    6339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-5rqvj_default(2afe3a72-b0b3-4d6f-a81a-8ccadcc5b83b)\"" pod="default/hello-node-64b4f8f9ff-5rqvj" podUID="2afe3a72-b0b3-4d6f-a81a-8ccadcc5b83b"
	Oct 09 18:52:28 functional-517000 kubelet[6339]: I1009 18:52:28.345380    6339 scope.go:117] "RemoveContainer" containerID="ffefef26381de5b6d3876f8d375fcf467d521dbc034bcac664a39e4cc4d07d26"
	Oct 09 18:52:28 functional-517000 kubelet[6339]: I1009 18:52:28.353051    6339 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253b33d5-24b7-4ae5-bfcd-d4efcb2dd404" path="/var/lib/kubelet/pods/253b33d5-24b7-4ae5-bfcd-d4efcb2dd404/volumes"
	Oct 09 18:52:29 functional-517000 kubelet[6339]: I1009 18:52:29.168280    6339 scope.go:117] "RemoveContainer" containerID="ffefef26381de5b6d3876f8d375fcf467d521dbc034bcac664a39e4cc4d07d26"
	Oct 09 18:52:29 functional-517000 kubelet[6339]: I1009 18:52:29.168610    6339 scope.go:117] "RemoveContainer" containerID="93962dd42a5009524726a8dd83795622ace99a226632873d250e32615a7f4e10"
	Oct 09 18:52:29 functional-517000 kubelet[6339]: E1009 18:52:29.168753    6339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-88jjn_default(132ecdf9-5e02-4865-a825-6a9ee4e20459)\"" pod="default/hello-node-connect-65d86f57f4-88jjn" podUID="132ecdf9-5e02-4865-a825-6a9ee4e20459"
	Oct 09 18:52:29 functional-517000 kubelet[6339]: I1009 18:52:29.190980    6339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.46262694 podStartE2EDuration="2.190963866s" podCreationTimestamp="2024-10-09 18:52:27 +0000 UTC" firstStartedPulling="2024-10-09 18:52:27.951490921 +0000 UTC m=+57.677953463" lastFinishedPulling="2024-10-09 18:52:28.679827847 +0000 UTC m=+58.406290389" observedRunningTime="2024-10-09 18:52:29.190730415 +0000 UTC m=+58.917192957" watchObservedRunningTime="2024-10-09 18:52:29.190963866 +0000 UTC m=+58.917426408"
	Oct 09 18:52:30 functional-517000 kubelet[6339]: E1009 18:52:30.350162    6339 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 18:52:30 functional-517000 kubelet[6339]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 18:52:30 functional-517000 kubelet[6339]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 18:52:30 functional-517000 kubelet[6339]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 18:52:30 functional-517000 kubelet[6339]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 18:52:30 functional-517000 kubelet[6339]: I1009 18:52:30.414612    6339 scope.go:117] "RemoveContainer" containerID="038318400a7c4918abd93e51f230b4c263d77ac221167d0c17e4eabaeba94ae1"
	Oct 09 18:52:36 functional-517000 kubelet[6339]: I1009 18:52:36.820056    6339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/13c3b555-4808-498a-823e-c7a3b19d83de-test-volume\") pod \"busybox-mount\" (UID: \"13c3b555-4808-498a-823e-c7a3b19d83de\") " pod="default/busybox-mount"
	Oct 09 18:52:36 functional-517000 kubelet[6339]: I1009 18:52:36.820099    6339 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zpk\" (UniqueName: \"kubernetes.io/projected/13c3b555-4808-498a-823e-c7a3b19d83de-kube-api-access-s5zpk\") pod \"busybox-mount\" (UID: \"13c3b555-4808-498a-823e-c7a3b19d83de\") " pod="default/busybox-mount"
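
The kubelet's recurring "Could not set up iptables canary" error above means the guest kernel has no ip6tables nat table, so the IPv6 canary chain cannot be created; since kube-proxy is running single-stack IPv4 (see its log above), this is noise rather than a cause of the failure. A hedged way to confirm from the host, assuming the profile is still up and that the ISO ships ip6table_nat as a loadable module (an assumption):

	out/minikube-darwin-arm64 -p functional-517000 ssh "sudo ip6tables -t nat -L"
	out/minikube-darwin-arm64 -p functional-517000 ssh "sudo modprobe ip6table_nat"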
	
	
	==> storage-provisioner [0e3ab9139e53] <==
	I1009 18:51:33.864064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:51:33.874348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:51:33.874371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:51:51.294148       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:51:51.294235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-517000_5c007312-c6d3-472d-93ed-ed6e35fffb77!
	I1009 18:51:51.294357       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02e034d7-b5aa-457e-9bad-251a4e3e094f", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-517000_5c007312-c6d3-472d-93ed-ed6e35fffb77 became leader
	I1009 18:51:51.397632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-517000_5c007312-c6d3-472d-93ed-ed6e35fffb77!
	I1009 18:52:13.885790       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1009 18:52:13.886204       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"763519d8-64e0-4a50-86fa-8e1c5c946c9b", APIVersion:"v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1009 18:52:13.885893       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    723a013f-6e9b-46d8-a28a-8b5817613a1c 330 0 2024-10-09 18:50:03 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-09 18:50:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  763519d8-64e0-4a50-86fa-8e1c5c946c9b 749 0 2024-10-09 18:52:13 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-09 18:52:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-09 18:52:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1009 18:52:13.887320       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b" provisioned
	I1009 18:52:13.887376       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1009 18:52:13.887930       1 volume_store.go:212] Trying to save persistentvolume "pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b"
	I1009 18:52:13.894322       1 volume_store.go:219] persistentvolume "pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b" saved
	I1009 18:52:13.895533       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"763519d8-64e0-4a50-86fa-8e1c5c946c9b", APIVersion:"v1", ResourceVersion:"749", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b
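
The provisioner's bookkeeping above can be cross-checked against the API server, since both object names appear in the log; this again assumes the functional-517000 context is still live:

	kubectl --context functional-517000 get pvc myclaim
	kubectl --context functional-517000 get pv pvc-763519d8-64e0-4a50-86fa-8e1c5c946c9b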
	
	
	==> storage-provisioner [dcaae86796df] <==
	I1009 18:50:41.842806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:50:41.848131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:50:41.848150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:50:59.253533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:50:59.253671       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-517000_bfd41cbb-8737-45ab-8552-9c9e57dadfc2!
	I1009 18:50:59.253726       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02e034d7-b5aa-457e-9bad-251a4e3e094f", APIVersion:"v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-517000_bfd41cbb-8737-45ab-8552-9c9e57dadfc2 became leader
	I1009 18:50:59.355081       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-517000_bfd41cbb-8737-45ab-8552-9c9e57dadfc2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-517000 -n functional-517000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-517000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-517000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-517000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-517000/192.168.105.4
	Start Time:       Wed, 09 Oct 2024 11:52:36 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s5zpk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s5zpk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/busybox-mount to functional-517000
	  Normal  Pulling    1s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
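The describe output shows busybox-mount was one second old and still pulling its image when the post-mortem ran, so the non-running pod appears to be a timing artifact of state collection rather than a failure of its own. When reproducing locally, a quick check is to watch the pod reach Running:

	kubectl --context functional-517000 get pod busybox-mount -w
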
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (27.64s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (725.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-845000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1009 11:56:56.178250    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.185998    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.199393    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.222623    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.266065    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.349523    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.512965    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:56.836680    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:57.480373    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:56:58.764186    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:57:01.327960    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:57:06.451636    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:57:16.695262    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:57:37.178817    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:58:18.142114    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 11:59:40.064935    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:01:56.174172    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:02:23.906759    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-845000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.311550625s)

                                                
                                                
-- stdout --
	* [ha-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-845000" primary control-plane node in "ha-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 11:52:52.354322    2345 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:52:52.354466    2345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:52:52.354469    2345 out.go:358] Setting ErrFile to fd 2...
	I1009 11:52:52.354472    2345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:52:52.354624    2345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 11:52:52.355781    2345 out.go:352] Setting JSON to false
	I1009 11:52:52.375288    2345 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1342,"bootTime":1728498630,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:52:52.375391    2345 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:52:52.378925    2345 out.go:177] * [ha-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:52:52.387009    2345 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 11:52:52.387029    2345 notify.go:220] Checking for updates...
	I1009 11:52:52.393930    2345 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:52:52.401932    2345 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:52:52.409947    2345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:52:52.413957    2345 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 11:52:52.416985    2345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 11:52:52.420131    2345 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:52:52.423868    2345 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 11:52:52.430948    2345 start.go:297] selected driver: qemu2
	I1009 11:52:52.430956    2345 start.go:901] validating driver "qemu2" against <nil>
	I1009 11:52:52.430962    2345 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 11:52:52.433869    2345 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 11:52:52.436929    2345 out.go:177] * Automatically selected the socket_vmnet network
	I1009 11:52:52.440042    2345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 11:52:52.440061    2345 cni.go:84] Creating CNI manager for ""
	I1009 11:52:52.440079    2345 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 11:52:52.440082    2345 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 11:52:52.440123    2345 start.go:340] cluster config:
	{Name:ha-845000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:52:52.444946    2345 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 11:52:52.452936    2345 out.go:177] * Starting "ha-845000" primary control-plane node in "ha-845000" cluster
	I1009 11:52:52.456959    2345 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 11:52:52.456980    2345 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 11:52:52.456993    2345 cache.go:56] Caching tarball of preloaded images
	I1009 11:52:52.457068    2345 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 11:52:52.457073    2345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 11:52:52.457296    2345 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/ha-845000/config.json ...
	I1009 11:52:52.457306    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/ha-845000/config.json: {Name:mk06ce34843d17a9d7a74714120ca6bbe30954eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:52:52.457619    2345 start.go:360] acquireMachinesLock for ha-845000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 11:52:52.457661    2345 start.go:364] duration metric: took 37.041µs to acquireMachinesLock for "ha-845000"
	I1009 11:52:52.457670    2345 start.go:93] Provisioning new machine with config: &{Name:ha-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 11:52:52.457703    2345 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 11:52:52.460999    2345 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 11:52:52.482585    2345 start.go:159] libmachine.API.Create for "ha-845000" (driver="qemu2")
	I1009 11:52:52.482611    2345 client.go:168] LocalClient.Create starting
	I1009 11:52:52.482695    2345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 11:52:52.482733    2345 main.go:141] libmachine: Decoding PEM data...
	I1009 11:52:52.482750    2345 main.go:141] libmachine: Parsing certificate...
	I1009 11:52:52.482793    2345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 11:52:52.482823    2345 main.go:141] libmachine: Decoding PEM data...
	I1009 11:52:52.482830    2345 main.go:141] libmachine: Parsing certificate...
	I1009 11:52:52.483170    2345 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 11:52:52.750714    2345 main.go:141] libmachine: Creating SSH key...
	I1009 11:52:52.830799    2345 main.go:141] libmachine: Creating Disk image...
	I1009 11:52:52.830805    2345 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 11:52:52.830996    2345 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 11:52:52.842452    2345 main.go:141] libmachine: STDOUT: 
	I1009 11:52:52.842470    2345 main.go:141] libmachine: STDERR: 
	I1009 11:52:52.842533    2345 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2 +20000M
	I1009 11:52:52.851114    2345 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 11:52:52.851131    2345 main.go:141] libmachine: STDERR: 
	I1009 11:52:52.851142    2345 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 11:52:52.851155    2345 main.go:141] libmachine: Starting QEMU VM...
	I1009 11:52:52.851166    2345 qemu.go:418] Using hvf for hardware acceleration
	I1009 11:52:52.851199    2345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:47:a4:45:59:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 11:52:52.890484    2345 main.go:141] libmachine: STDOUT: 
	I1009 11:52:52.890510    2345 main.go:141] libmachine: STDERR: 
	I1009 11:52:52.890515    2345 main.go:141] libmachine: Attempt 0
	I1009 11:52:52.890530    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:52:52.890624    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:52:52.890645    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:52:52.890658    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:52:52.890670    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:52:52.890677    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:52:54.892841    2345 main.go:141] libmachine: Attempt 1
	I1009 11:52:54.892915    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:52:54.893304    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:52:54.893359    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:52:54.893429    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:52:54.893463    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:52:54.893499    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:52:56.895698    2345 main.go:141] libmachine: Attempt 2
	I1009 11:52:56.895818    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:52:56.896181    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:52:56.896235    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:52:56.896271    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:52:56.896346    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:52:56.896380    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:52:58.898577    2345 main.go:141] libmachine: Attempt 3
	I1009 11:52:58.898619    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:52:58.898766    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:52:58.898781    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:52:58.898790    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:52:58.898794    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:52:58.898799    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:53:00.900825    2345 main.go:141] libmachine: Attempt 4
	I1009 11:53:00.900833    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:53:00.900880    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:53:00.900886    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:53:00.900895    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:53:00.900903    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:53:00.900907    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:53:02.901953    2345 main.go:141] libmachine: Attempt 5
	I1009 11:53:02.901960    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:53:02.902002    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:53:02.902008    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:53:02.902014    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:53:02.902019    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:53:02.902024    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:53:04.904054    2345 main.go:141] libmachine: Attempt 6
	I1009 11:53:04.904079    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:53:04.904167    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1009 11:53:04.904180    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:53:04.904186    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:53:04.904191    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:53:04.904195    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:53:06.906254    2345 main.go:141] libmachine: Attempt 7
	I1009 11:53:06.906319    2345 main.go:141] libmachine: Searching for 62:47:a4:45:59:9e in /var/db/dhcpd_leases ...
	I1009 11:53:06.906461    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:53:06.906484    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:53:06.906487    2345 main.go:141] libmachine: Found match: 62:47:a4:45:59:9e
	I1009 11:53:06.906497    2345 main.go:141] libmachine: IP: 192.168.105.5
	I1009 11:53:06.906501    2345 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1009 11:58:52.480573    2345 start.go:128] duration metric: took 6m0.027171167s to createHost
	I1009 11:58:52.480660    2345 start.go:83] releasing machines lock for "ha-845000", held for 6m0.0273465s
	W1009 11:58:52.481344    2345 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1009 11:58:52.491814    2345 out.go:177] * Deleting "ha-845000" in qemu2 ...
	W1009 11:58:52.533419    2345 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1009 11:58:52.533453    2345 start.go:729] Will try again in 5 seconds ...
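
Reading the stderr so far: the first VM did obtain a DHCP lease (192.168.105.5, at attempt 7) but never answered SSH, so createHost gave up at its 360-second budget and minikube deleted the machine to retry. When reproducing on the same host, the two checks libmachine performs can be run by hand; both the lease path and the SSH target come straight from the log:

	cat /var/db/dhcpd_leases
	ssh -p 22 docker@192.168.105.5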
	I1009 11:58:57.535617    2345 start.go:360] acquireMachinesLock for ha-845000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 11:58:57.536254    2345 start.go:364] duration metric: took 521.667µs to acquireMachinesLock for "ha-845000"
	I1009 11:58:57.536399    2345 start.go:93] Provisioning new machine with config: &{Name:ha-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 11:58:57.536686    2345 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 11:58:57.541306    2345 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 11:58:57.592290    2345 start.go:159] libmachine.API.Create for "ha-845000" (driver="qemu2")
	I1009 11:58:57.592339    2345 client.go:168] LocalClient.Create starting
	I1009 11:58:57.592499    2345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 11:58:57.592580    2345 main.go:141] libmachine: Decoding PEM data...
	I1009 11:58:57.592606    2345 main.go:141] libmachine: Parsing certificate...
	I1009 11:58:57.592675    2345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 11:58:57.592731    2345 main.go:141] libmachine: Decoding PEM data...
	I1009 11:58:57.592750    2345 main.go:141] libmachine: Parsing certificate...
	I1009 11:58:57.593335    2345 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 11:58:57.747565    2345 main.go:141] libmachine: Creating SSH key...
	I1009 11:58:57.779389    2345 main.go:141] libmachine: Creating Disk image...
	I1009 11:58:57.779395    2345 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 11:58:57.779581    2345 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 11:58:57.789409    2345 main.go:141] libmachine: STDOUT: 
	I1009 11:58:57.789437    2345 main.go:141] libmachine: STDERR: 
	I1009 11:58:57.789502    2345 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2 +20000M
	I1009 11:58:57.798102    2345 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 11:58:57.798119    2345 main.go:141] libmachine: STDERR: 
	I1009 11:58:57.798135    2345 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 11:58:57.798140    2345 main.go:141] libmachine: Starting QEMU VM...
	I1009 11:58:57.798146    2345 qemu.go:418] Using hvf for hardware acceleration
	I1009 11:58:57.798191    2345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d5:b1:e5:38:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 11:58:57.834754    2345 main.go:141] libmachine: STDOUT: 
	I1009 11:58:57.834783    2345 main.go:141] libmachine: STDERR: 
	I1009 11:58:57.834787    2345 main.go:141] libmachine: Attempt 0
	I1009 11:58:57.834800    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:58:57.834917    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:58:57.834928    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:58:57.834938    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:58:57.834944    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:58:57.834959    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:58:57.834967    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:58:59.837110    2345 main.go:141] libmachine: Attempt 1
	I1009 11:58:59.837201    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:58:59.837708    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:58:59.837760    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:58:59.837790    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:58:59.837819    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:58:59.837852    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:58:59.837884    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:59:01.838487    2345 main.go:141] libmachine: Attempt 2
	I1009 11:59:01.838565    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:59:01.839143    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:59:01.839198    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:59:01.839228    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:59:01.839256    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:59:01.839298    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:59:01.839325    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:59:03.841360    2345 main.go:141] libmachine: Attempt 3
	I1009 11:59:03.841439    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:59:03.841575    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:59:03.841588    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:59:03.841598    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:59:03.841604    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:59:03.841610    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:59:03.841617    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:59:05.843640    2345 main.go:141] libmachine: Attempt 4
	I1009 11:59:05.843652    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:59:05.843697    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:59:05.843706    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:59:05.843713    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:59:05.843718    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:59:05.843723    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:59:05.843727    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:59:07.845741    2345 main.go:141] libmachine: Attempt 5
	I1009 11:59:07.845752    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:59:07.845781    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:59:07.845787    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:59:07.845791    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:59:07.845812    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:59:07.845817    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:59:07.845823    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:59:09.847848    2345 main.go:141] libmachine: Attempt 6
	I1009 11:59:09.847876    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:59:09.847960    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1009 11:59:09.847970    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:62:47:a4:45:59:9e ID:1,62:47:a4:45:59:9e Lease:0x6706df21}
	I1009 11:59:09.847976    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:de:f4:9f:dd:3:a5 ID:1,de:f4:9f:dd:3:a5 Lease:0x6706de52}
	I1009 11:59:09.847990    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6:de:1f:71:7f:2 ID:1,6:de:1f:71:7f:2 Lease:0x6706d000}
	I1009 11:59:09.847995    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:fa:15:2:2:89:77 ID:1,fa:15:2:2:89:77 Lease:0x6706dd99}
	I1009 11:59:09.848001    2345 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6706d9ed}
	I1009 11:59:11.850079    2345 main.go:141] libmachine: Attempt 7
	I1009 11:59:11.850101    2345 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 11:59:11.850173    2345 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1009 11:59:11.850196    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:da:d5:b1:e5:38:54 ID:1,da:d5:b1:e5:38:54 Lease:0x6706e08e}
	I1009 11:59:11.850199    2345 main.go:141] libmachine: Found match: da:d5:b1:e5:38:54
	I1009 11:59:11.850216    2345 main.go:141] libmachine: IP: 192.168.105.6
	I1009 11:59:11.850221    2345 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1009 12:04:57.590312    2345 start.go:128] duration metric: took 6m0.057926083s to createHost
	I1009 12:04:57.590378    2345 start.go:83] releasing machines lock for "ha-845000", held for 6m0.058455041s
	W1009 12:04:57.590600    2345 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-845000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-845000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1009 12:04:57.598152    2345 out.go:201] 
	W1009 12:04:57.601275    2345 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1009 12:04:57.601362    2345 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1009 12:04:57.601391    2345 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1009 12:04:57.604167    2345 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-845000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (71.531708ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:04:57.695942    2721 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:04:57.695949    2721 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.39s)
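
What the StartCluster failure shows: libmachine polled /var/db/dhcpd_leases every two seconds for the VM's MAC (da:d5:b1:e5:38:54), found a matching lease on attempt 7 (192.168.105.6), but SSH to the guest never came up, so createHost hit its 360-second limit and minikube exited with DRV_CREATE_TIMEOUT. Below is a minimal sketch of that polling pattern, assuming the macOS lease-record layout (name=/ip_address=/hw_address= fields); all names are illustrative, not minikube's actual source.

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// lookupLeaseIP re-reads /var/db/dhcpd_leases every two seconds and
// returns the ip_address of the record whose hw_address matches mac.
func lookupLeaseIP(mac string, maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		var ip string
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v // IP of the record we are currently inside
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// hw_address is "<type>,<mac>"; compare only the MAC part.
				if _, m, found := strings.Cut(v, ","); found && m == mac {
					return ip, nil
				}
			}
		}
		fmt.Printf("Attempt %d: %s not in lease file yet\n", attempt, mac)
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, maxAttempts)
}

func main() {
	ip, err := lookupLeaseIP("da:d5:b1:e5:38:54", 30)
	fmt.Println(ip, err)
}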

                                                
                                    
TestMultiControlPlane/serial/DeployApp (117.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (69.568084ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-845000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- rollout status deployment/busybox: exit status 1 (63.192542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.321125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:04:57.892422    1686 retry.go:31] will retry after 958.928882ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.800541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:04:58.961513    1686 retry.go:31] will retry after 2.052877222s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.658292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:01.126383    1686 retry.go:31] will retry after 2.952611142s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.3245ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:04.189675    1686 retry.go:31] will retry after 1.82468941s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.103833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:06.119735    1686 retry.go:31] will retry after 6.758554306s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.466917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:12.989128    1686 retry.go:31] will retry after 5.349976576s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.837458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:18.450300    1686 retry.go:31] will retry after 11.784852596s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.151209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:30.348505    1686 retry.go:31] will retry after 17.261510358s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.936334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:05:47.720265    1686 retry.go:31] will retry after 16.619877644s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.646083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:06:04.451142    1686 retry.go:31] will retry after 50.637462036s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.175375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.653667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.057291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.851ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.458625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.29425ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:55.483591    2788 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:55.483603    2788 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (117.79s)
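
Every kubectl call in this block fails with `no server found for cluster "ha-845000"`: StartCluster never produced a running host, so the kubeconfig entry for the profile has no server endpoint. The growing delays logged by retry.go:31 (about 1s, 2s, 3s, ... up to ~50s) are randomized exponential backoff. Here is a minimal, self-contained sketch of that pattern; the function name and parameters are illustrative, not minikube's actual retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries op with randomized exponential backoff, logging each
// delay the way retry.go:31 does above.
func retryExpo(op func() error, maxDelay time.Duration, attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter around the nominal delay so parallel retries spread out.
		d := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}

func main() {
	err := retryExpo(func() error {
		return errors.New("failed to retrieve Pod IPs (may be temporary)")
	}, time.Minute, 11)
	fmt.Println("gave up:", err)
}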

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-845000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.450917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-845000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.144958ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:55.580395    2793 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:55.580400    2793 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-845000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-845000 -v=7 --alsologtostderr: exit status 50 (50.568792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:06:55.613553    2795 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:06:55.613808    2795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:06:55.613811    2795 out.go:358] Setting ErrFile to fd 2...
	I1009 12:06:55.613814    2795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:06:55.613950    2795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:06:55.614177    2795 mustload.go:65] Loading cluster: ha-845000
	I1009 12:06:55.614386    2795 config.go:182] Loaded profile config "ha-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:06:55.615089    2795 host.go:66] Checking if "ha-845000" exists ...
	I1009 12:06:55.620210    2795 out.go:201] 
	W1009 12:06:55.623177    2795 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-845000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-845000 endpoint: failed to lookup ip for ""
	W1009 12:06:55.623201    2795 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1009 12:06:55.627208    2795 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-845000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (34.888583ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:55.666128    2797 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:55.666141    2797 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)
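
The AddWorkerNode failure and every post-mortem in this report trace back to the same empty value: the profile's stored IP is "", which is why DRV_CP_ENDPOINT says `failed to lookup ip for ""` and why `status` logs `parsing IP: ` with nothing after the colon. net.ParseIP rejects an empty string, and formatting the failed input into the error reproduces the truncated-looking message. A hedged reconstruction, not minikube's actual source:

package main

import (
	"fmt"
	"net"
)

// driverIP validates a stored IP string; an empty string is rejected,
// and the error then ends in nothing after "parsing IP: ".
func driverIP(raw string) (net.IP, error) {
	ip := net.ParseIP(raw)
	if ip == nil {
		return nil, fmt.Errorf("parsing IP: %s", raw)
	}
	return ip, nil
}

func main() {
	_, err := driverIP("") // the VM never got a lease, so raw is ""
	fmt.Println(err)       // "parsing IP: " with an empty value, as in the log
}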

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-845000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-845000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.192708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-845000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-845000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-845000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.51125ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:55.729170    2800 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:55.729177    2800 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
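
The second NodeLabels error, `unexpected end of JSON input`, is the standard encoding/json error for empty input: with the kubectl context missing, the command printed nothing, and the test then tried to decode an empty byte slice. Minimal reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}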

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-845000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-845000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-845000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-845000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-845000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-845000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-845000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-845000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.268125ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:55.816742    2805 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:55.816750    2805 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
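
The assertion at ha_test.go:305 counts entries in Config.Nodes of the `minikube profile list --output json` payload dumped above; the profile holds one node where the test expects four, and its Status is "Unknown" rather than "HAppy" because the host cannot be queried. A sketch of that check, with struct fields mirroring only the keys visible in the log (illustrative types, not the test's actual ones):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of the `profile list --output json`
// payload to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				Name         string
				IP           string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func nodeCount(raw []byte, profile string) (int, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return 0, err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return len(p.Config.Nodes), nil
		}
	}
	return 0, fmt.Errorf("profile %q not found", profile)
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-845000","Status":"Unknown","Config":{"Nodes":[{"Name":"","IP":"","ControlPlane":true,"Worker":true}]}}]}`)
	n, err := nodeCount(raw, "ha-845000")
	fmt.Println(n, err) // 1 <nil>, where the test wanted 4
}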

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-845000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-845000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.39875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:06:55.886623    2809 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:06:55.886910    2809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:06:55.886913    2809 out.go:358] Setting ErrFile to fd 2...
	I1009 12:06:55.886915    2809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:06:55.887041    2809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:06:55.887287    2809 mustload.go:65] Loading cluster: ha-845000
	I1009 12:06:55.887506    2809 config.go:182] Loaded profile config "ha-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:06:55.891212    2809 out.go:201] 
	W1009 12:06:55.894201    2809 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1009 12:06:55.894206    2809 out.go:270] * 
	* 
	W1009 12:06:55.895630    2809 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:06:55.900128    2809 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-845000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (34.685125ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:55.973717    2813 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:55.973725    2813 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
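
`Could not find node m02` follows directly from the profile config dumped earlier: the Nodes array holds a single entry, so there is no second machine to stop. minikube names additional machines with an -mNN suffix (ha-845000-m02 would be the second control plane); a small illustration of that naming convention, not minikube's source:

package main

import "fmt"

// machineName mirrors minikube's node naming scheme: the first machine
// carries the bare profile name, later ones get an -mNN suffix.
func machineName(profile string, idx int) string {
	if idx <= 1 {
		return profile
	}
	return fmt.Sprintf("%s-m%02d", profile, idx)
}

func main() {
	fmt.Println(machineName("ha-845000", 2)) // ha-845000-m02
}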

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-845000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-845000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-845000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-845000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.451125ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 12:06:56.061568    2818 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:56.061578    2818 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
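
ha_test.go:415 expects the profile to report "Degraded" after one control plane is stopped, while ha_test.go:309 earlier expected "HAppy"; both get "Unknown" because the host state cannot be read at all. One plausible mapping of those statuses, purely illustrative (the status names come from the log, the thresholds are an assumption):

package main

import "fmt"

func haProfileStatus(healthyCP, totalCP int, hostKnown bool) string {
	switch {
	case !hostKnown:
		return "Unknown" // what this report shows: status could not be read
	case healthyCP == totalCP:
		return "HAppy" // every control plane responds
	case healthyCP > totalCP/2:
		return "Degraded" // a quorum survives
	default:
		return "Unhealthy" // assumption: placeholder for the remaining states
	}
}

func main() {
	fmt.Println(haProfileStatus(2, 3, true)) // Degraded
}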

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-845000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-845000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.445708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:06:56.094650    2820 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:06:56.094921    2820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:06:56.094925    2820 out.go:358] Setting ErrFile to fd 2...
	I1009 12:06:56.094927    2820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:06:56.095060    2820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:06:56.095335    2820 mustload.go:65] Loading cluster: ha-845000
	I1009 12:06:56.095543    2820 config.go:182] Loaded profile config "ha-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:06:56.100226    2820 out.go:201] 
	W1009 12:06:56.103225    2820 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1009 12:06:56.103242    2820 out.go:270] * 
	* 
	W1009 12:06:56.104653    2820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:06:56.108132    2820 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1009 12:06:56.094650    2820 out.go:345] Setting OutFile to fd 1 ...
I1009 12:06:56.094921    2820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 12:06:56.094925    2820 out.go:358] Setting ErrFile to fd 2...
I1009 12:06:56.094927    2820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 12:06:56.095060    2820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 12:06:56.095335    2820 mustload.go:65] Loading cluster: ha-845000
I1009 12:06:56.095543    2820 config.go:182] Loaded profile config "ha-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 12:06:56.100226    2820 out.go:201] 
W1009 12:06:56.103225    2820 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1009 12:06:56.103242    2820 out.go:270] * 
* 
W1009 12:06:56.104653    2820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1009 12:06:56.108132    2820 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-845000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-845000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
E1009 12:06:56.170121    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (32.846666ms)

                                                
                                                
** stderr ** 
	E1009 12:06:56.176903    2825 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1009 12:06:56.177393    2825 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1009 12:06:56.178495    2825 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1009 12:06:56.178790    2825 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1009 12:06:56.180118    2825 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.610875ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1009 12:06:56.215610    2828 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:56.215616    2828 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)
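
The post-mortem above reads host state with `minikube status --format={{.Host}}`. That format string is a Go text/template rendered against the status object, which is why the command can print a bare "Error". A minimal sketch of the mechanism, assuming an illustrative Status type rather than minikube's real one:

    // Sketch: how a --format value such as {{.Host}} is rendered.
    package main

    import (
        "os"
        "text/template"
    )

    // Status is illustrative only; minikube's actual status type differs.
    type Status struct {
        Host string // e.g. "Running", "Stopped", or "Error"
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        // Prints "Error", matching the post-mortem stdout above.
        _ = tmpl.Execute(os.Stdout, Status{Host: "Error"})
    }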

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-845000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-845000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-845000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-845000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-845000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-845000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-845000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-845000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (34.428917ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1009 12:06:56.304715    2833 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:06:56.304725    2833 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
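
Both assertions above reduce to decoding the `profile list --output json` payload quoted in the failure and checking `Status` and `len(Config.Nodes)`. A self-contained sketch, with structs covering only the fields used here and the sample payload abbreviated from the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors just the parts of the payload this check needs.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-845000","Status":"Unknown","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        // The test wants 4 nodes and "HAppy"; the saved profile has one node
        // and "Unknown", so both assertions fail.
        fmt.Println(pl.Valid[0].Status, len(pl.Valid[0].Config.Nodes)) // Unknown 1
    }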

TestMultiControlPlane/serial/RestartClusterKeepsNodes (956.04s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-845000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-845000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-845000 -v=7 --alsologtostderr: (6.352584917s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-845000 --wait=true -v=7 --alsologtostderr
E1009 12:11:56.162036    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:13:19.257640    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:16:56.158017    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:21:56.154048    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-845000 --wait=true -v=7 --alsologtostderr: signal: killed (15m49.619040125s)

-- stdout --
	* [ha-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-845000" primary control-plane node in "ha-845000" cluster
	* Restarting existing qemu2 VM for "ha-845000" ...

-- /stdout --
** stderr ** 
	I1009 12:07:02.760402    2857 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:07:02.760580    2857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:07:02.760584    2857 out.go:358] Setting ErrFile to fd 2...
	I1009 12:07:02.760587    2857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:07:02.760741    2857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:07:02.761999    2857 out.go:352] Setting JSON to false
	I1009 12:07:02.781646    2857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2192,"bootTime":1728498630,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:07:02.781713    2857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:07:02.787022    2857 out.go:177] * [ha-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:07:02.794970    2857 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:07:02.795057    2857 notify.go:220] Checking for updates...
	I1009 12:07:02.801941    2857 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:07:02.804933    2857 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:07:02.807857    2857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:07:02.810986    2857 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:07:02.813960    2857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:07:02.815532    2857 config.go:182] Loaded profile config "ha-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:07:02.815583    2857 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:07:02.820007    2857 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:07:02.826792    2857 start.go:297] selected driver: qemu2
	I1009 12:07:02.826799    2857 start.go:901] validating driver "qemu2" against &{Name:ha-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:07:02.826854    2857 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:07:02.829437    2857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:07:02.829457    2857 cni.go:84] Creating CNI manager for ""
	I1009 12:07:02.829481    2857 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 12:07:02.829529    2857 start.go:340] cluster config:
	{Name:ha-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:07:02.833927    2857 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:07:02.841964    2857 out.go:177] * Starting "ha-845000" primary control-plane node in "ha-845000" cluster
	I1009 12:07:02.845947    2857 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:07:02.845965    2857 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:07:02.845975    2857 cache.go:56] Caching tarball of preloaded images
	I1009 12:07:02.846058    2857 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:07:02.846064    2857 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:07:02.846128    2857 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/ha-845000/config.json ...
	I1009 12:07:02.846506    2857 start.go:360] acquireMachinesLock for ha-845000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:07:02.846552    2857 start.go:364] duration metric: took 40.458µs to acquireMachinesLock for "ha-845000"
	I1009 12:07:02.846569    2857 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:07:02.846573    2857 fix.go:54] fixHost starting: 
	I1009 12:07:02.846687    2857 fix.go:112] recreateIfNeeded on ha-845000: state=Stopped err=<nil>
	W1009 12:07:02.846701    2857 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:07:02.854893    2857 out.go:177] * Restarting existing qemu2 VM for "ha-845000" ...
	I1009 12:07:02.858926    2857 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:07:02.858972    2857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d5:b1:e5:38:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/ha-845000/disk.qcow2
	I1009 12:07:02.898826    2857 main.go:141] libmachine: STDOUT: 
	I1009 12:07:02.898859    2857 main.go:141] libmachine: STDERR: 
	I1009 12:07:02.898864    2857 main.go:141] libmachine: Attempt 0
	I1009 12:07:02.898876    2857 main.go:141] libmachine: Searching for da:d5:b1:e5:38:54 in /var/db/dhcpd_leases ...
	I1009 12:07:02.898987    2857 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1009 12:07:02.899008    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:da:d5:b1:e5:38:54 ID:1,da:d5:b1:e5:38:54 Lease:0x6706d453}
	I1009 12:07:02.899018    2857 main.go:141] libmachine: Found match: da:d5:b1:e5:38:54
	I1009 12:07:02.899027    2857 main.go:141] libmachine: IP: 192.168.105.6
	I1009 12:07:02.899032    2857 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-845000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-845000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-845000: context deadline exceeded (542ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-845000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-845000	

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-845000 -n ha-845000: exit status 7 (35.225667ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1009 12:22:52.329626    2964 status.go:393] failed to get driver ip: parsing IP: 
	E1009 12:22:52.329631    2964 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-845000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (956.04s)
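
The instant (542ns) `node list` failure is the parent test context expiring rather than a new fault: once the roughly 15-minute budget killed the `start` command, any follow-up call bound to the same context fails before it can do any work. A small sketch of that behavior, with the timeout shrunk so it trips immediately:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func main() {
        // Stand-in for the test's ~15m budget.
        ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
        defer cancel()
        <-ctx.Done()
        // Anything dispatched against ctx now fails up front, which is why
        // `node list` reported "context deadline exceeded" in nanoseconds.
        fmt.Println(ctx.Err()) // context deadline exceeded
    }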

TestJSONOutput/start/Command (725.26s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-945000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1009 12:26:56.088151    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:29:59.182908    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:31:56.079447    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-945000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.258950041s)

-- stdout --
	{"specversion":"1.0","id":"7a1a5c85-f4a4-4423-9097-1c37632b98f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8954e43-f999-4433-af35-307f13510c48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"95eade90-ac62-48a2-9b44-0addcf9b04b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig"}}
	{"specversion":"1.0","id":"66d4fa3b-2f64-4c9a-bcbb-db31b0cad623","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3f507665-79d8-4bd0-9e94-7bbb95cc0f03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b06aedc-3535-4a91-8914-c518303e30fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube"}}
	{"specversion":"1.0","id":"305bf25a-9751-438e-821a-a49fca940593","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca00e8c0-c7b9-425f-9345-aeb0de3d7ea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f9c8bfd-f48e-46e9-804e-bef64e3ccfa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"126ea01f-ecac-4635-bcb9-0464c8075e4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-945000\" primary control-plane node in \"json-output-945000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"91a68ea9-6db2-4cee-a4a0-b20011433a5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"31543dd5-c5c7-4586-9e0c-cf53b253747e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-945000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"dcf91dd9-6e8e-48a7-9d7b-25c4d7454799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"d944d264-eaf8-4a36-9a6e-c632020aa821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"e07ec548-7ed4-4e2d-b363-5fa2ce6ffc7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-945000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"b3141b21-d6b4-4183-8071-4fa2fcaeacb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-945000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.26s)
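
Each stdout line above is a CloudEvents-style JSON envelope, one event per line. A sketch of decoding a single event; the struct mirrors only the envelope fields visible in the log, and the sample line is shortened from the DRV_CREATE_TIMEOUT event above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // event covers the envelope fields that appear in the log output.
    type event struct {
        SpecVersion     string            `json:"specversion"`
        ID              string            `json:"id"`
        Source          string            `json:"source"`
        Type            string            `json:"type"`
        DataContentType string            `json:"datacontenttype"`
        Data            map[string]string `json:"data"`
    }

    func main() {
        line := []byte(`{"specversion":"1.0","id":"b3141b21-d6b4-4183-8071-4fa2fcaeacb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"52","name":"DRV_CREATE_TIMEOUT"}}`)
        var ev event
        if err := json.Unmarshal(line, &ev); err != nil {
            panic(err)
        }
        fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
    }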

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-945000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7a1a5c85-f4a4-4423-9097-1c37632b98f9
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b8954e43-f999-4433-af35-307f13510c48
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19780"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 95eade90-ac62-48a2-9b44-0addcf9b04b1
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 66d4fa3b-2f64-4c9a-bcbb-db31b0cad623
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3f507665-79d8-4bd0-9e94-7bbb95cc0f03
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5b06aedc-3535-4a91-8914-c518303e30fc
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 305bf25a-9751-438e-821a-a49fca940593
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ca00e8c0-c7b9-425f-9345-aeb0de3d7ea8
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9f9c8bfd-f48e-46e9-804e-bef64e3ccfa9
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 126ea01f-ecac-4635-bcb9-0464c8075e4d
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-945000\" primary control-plane node in \"json-output-945000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 91a68ea9-6db2-4cee-a4a0-b20011433a5c
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 31543dd5-c5c7-4586-9e0c-cf53b253747e
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-945000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dcf91dd9-6e8e-48a7-9d7b-25c4d7454799
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d944d264-eaf8-4a36-9a6e-c632020aa821
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: e07ec548-7ed4-4e2d-b363-5fa2ce6ffc7f
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-945000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b3141b21-d6b4-4183-8071-4fa2fcaeacb5
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
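
The invariant behind this failure: each `currentstep` value in the step events should carry one message, but the create/delete/retry loop above emits step 9 three times with different messages. A sketch of that kind of check (`checkDistinct` is illustrative, not the test's actual helper):

    package main

    import "fmt"

    type stepEvent struct {
        CurrentStep string
        Message     string
    }

    // checkDistinct flags a currentstep value reused with a new message.
    func checkDistinct(events []stepEvent) error {
        seen := make(map[string]string) // currentstep -> first message
        for _, ev := range events {
            if first, ok := seen[ev.CurrentStep]; ok && first != ev.Message {
                return fmt.Errorf("step %s already assigned to %q, cannot use for %q",
                    ev.CurrentStep, first, ev.Message)
            }
            seen[ev.CurrentStep] = ev.Message
        }
        return nil
    }

    func main() {
        fmt.Println(checkDistinct([]stepEvent{
            {"9", "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ..."},
            {"9", `Deleting "json-output-945000" in qemu2 ...`},
        })) // non-nil: step 9 reused
    }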

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7a1a5c85-f4a4-4423-9097-1c37632b98f9
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b8954e43-f999-4433-af35-307f13510c48
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19780"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 95eade90-ac62-48a2-9b44-0addcf9b04b1
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 66d4fa3b-2f64-4c9a-bcbb-db31b0cad623
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3f507665-79d8-4bd0-9e94-7bbb95cc0f03
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5b06aedc-3535-4a91-8914-c518303e30fc
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 305bf25a-9751-438e-821a-a49fca940593
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ca00e8c0-c7b9-425f-9345-aeb0de3d7ea8
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9f9c8bfd-f48e-46e9-804e-bef64e3ccfa9
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 126ea01f-ecac-4635-bcb9-0464c8075e4d
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-945000\" primary control-plane node in \"json-output-945000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 91a68ea9-6db2-4cee-a4a0-b20011433a5c
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 31543dd5-c5c7-4586-9e0c-cf53b253747e
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-945000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dcf91dd9-6e8e-48a7-9d7b-25c4d7454799
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d944d264-eaf8-4a36-9a6e-c632020aa821
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: e07ec548-7ed4-4e2d-b363-5fa2ce6ffc7f
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-945000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b3141b21-d6b4-4183-8071-4fa2fcaeacb5
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
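
The companion ordering invariant: `currentstep` should strictly increase over a run (0, 1, 3, ...), and the repeated step-9 events from the VM-create retry break it. A sketch of such a check, again illustrative rather than the test's own code:

    package main

    import (
        "fmt"
        "strconv"
    )

    // increasing reports whether the step values are strictly ascending.
    func increasing(steps []string) (bool, error) {
        prev := -1
        for _, s := range steps {
            n, err := strconv.Atoi(s)
            if err != nil {
                return false, err
            }
            if n <= prev {
                return false, nil
            }
            prev = n
        }
        return true, nil
    }

    func main() {
        ok, _ := increasing([]string{"0", "1", "3", "9", "9", "9"})
        fmt.Println(ok) // false: 9 repeats after the delete/retry
    }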

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-945000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-945000 --output=json --user=testUser: exit status 50 (89.049417ms)

-- stdout --
	{"specversion":"1.0","id":"c4bae5e5-2b00-467d-886f-5abad8ede982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-945000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-945000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-945000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-945000 --output=json --user=testUser: exit status 50 (59.538709ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-945000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-945000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
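
The literal `<no value>` in the suggestion above is standard Go text/template behavior: with the default missingkey option, a placeholder such as `{{.profileArg}}` (visible unexpanded in the pause failure's advice string) renders as `<no value>` when the key is absent from the data. A minimal reproduction:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        t := template.Must(template.New("advice").Parse("minikube delete {{.profileArg}}\n"))
        // No "profileArg" key is supplied, so the placeholder renders as
        // "<no value>", matching the suggestion text above.
        _ = t.Execute(os.Stdout, map[string]string{})
    }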

TestMountStart/serial/StartWithMountFirst (10.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-567000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E1009 12:36:56.071136    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-567000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.100800375s)

-- stdout --
	* [mount-start-1-567000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-567000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-567000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-567000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-567000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-567000 -n mount-start-1-567000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-567000 -n mount-start-1-567000: exit status 7 (75.516542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-567000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.18s)
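
The qemu2 driver starts the VM through `/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ...` (see the libmachine line in the RestartClusterKeepsNodes output above), so "Connection refused" here means nothing is listening on the `/var/run/socket_vmnet` unix socket. A small probe for that condition, assuming the default socket path:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            // e.g. "connect: connection refused" when socket_vmnet is not running
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }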

TestMultiNode/serial/FreshStart2Nodes (9.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-341000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-341000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.900814958s)

-- stdout --
	* [multinode-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-341000" primary control-plane node in "multinode-341000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-341000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:37:02.665665    3508 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:37:02.665828    3508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:37:02.665831    3508 out.go:358] Setting ErrFile to fd 2...
	I1009 12:37:02.665834    3508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:37:02.665979    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:37:02.667119    3508 out.go:352] Setting JSON to false
	I1009 12:37:02.684876    3508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3992,"bootTime":1728498630,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:37:02.684952    3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:37:02.690065    3508 out.go:177] * [multinode-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:37:02.696886    3508 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:37:02.696955    3508 notify.go:220] Checking for updates...
	I1009 12:37:02.704059    3508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:37:02.705539    3508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:37:02.709022    3508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:37:02.712029    3508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:37:02.715045    3508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:37:02.718189    3508 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:37:02.721974    3508 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:37:02.728973    3508 start.go:297] selected driver: qemu2
	I1009 12:37:02.728980    3508 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:37:02.728987    3508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:37:02.731497    3508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:37:02.735092    3508 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:37:02.738128    3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:37:02.738142    3508 cni.go:84] Creating CNI manager for ""
	I1009 12:37:02.738168    3508 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 12:37:02.738172    3508 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 12:37:02.738213    3508 start.go:340] cluster config:
	{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:37:02.742781    3508 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:37:02.750983    3508 out.go:177] * Starting "multinode-341000" primary control-plane node in "multinode-341000" cluster
	I1009 12:37:02.755020    3508 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:37:02.755042    3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:37:02.755049    3508 cache.go:56] Caching tarball of preloaded images
	I1009 12:37:02.755138    3508 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:37:02.755145    3508 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:37:02.755396    3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/multinode-341000/config.json ...
	I1009 12:37:02.755408    3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/multinode-341000/config.json: {Name:mk2a820be60e773136dfa3b093c8f36cecbbedc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:37:02.755782    3508 start.go:360] acquireMachinesLock for multinode-341000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:37:02.755834    3508 start.go:364] duration metric: took 44.791µs to acquireMachinesLock for "multinode-341000"
	I1009 12:37:02.755845    3508 start.go:93] Provisioning new machine with config: &{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:37:02.755915    3508 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:37:02.763953    3508 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:37:02.782458    3508 start.go:159] libmachine.API.Create for "multinode-341000" (driver="qemu2")
	I1009 12:37:02.782491    3508 client.go:168] LocalClient.Create starting
	I1009 12:37:02.782559    3508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:37:02.782598    3508 main.go:141] libmachine: Decoding PEM data...
	I1009 12:37:02.782616    3508 main.go:141] libmachine: Parsing certificate...
	I1009 12:37:02.782663    3508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:37:02.782693    3508 main.go:141] libmachine: Decoding PEM data...
	I1009 12:37:02.782704    3508 main.go:141] libmachine: Parsing certificate...
	I1009 12:37:02.783155    3508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:37:02.926762    3508 main.go:141] libmachine: Creating SSH key...
	I1009 12:37:03.081974    3508 main.go:141] libmachine: Creating Disk image...
	I1009 12:37:03.081983    3508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:37:03.082159    3508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:37:03.091884    3508 main.go:141] libmachine: STDOUT: 
	I1009 12:37:03.091901    3508 main.go:141] libmachine: STDERR: 
	I1009 12:37:03.091963    3508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2 +20000M
	I1009 12:37:03.100474    3508 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:37:03.100499    3508 main.go:141] libmachine: STDERR: 
	I1009 12:37:03.100517    3508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:37:03.100523    3508 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:37:03.100533    3508 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:37:03.100564    3508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d9:e0:7d:4e:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:37:03.102438    3508 main.go:141] libmachine: STDOUT: 
	I1009 12:37:03.102451    3508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:37:03.102469    3508 client.go:171] duration metric: took 319.98175ms to LocalClient.Create
	I1009 12:37:05.104600    3508 start.go:128] duration metric: took 2.348724417s to createHost
	I1009 12:37:05.104660    3508 start.go:83] releasing machines lock for "multinode-341000", held for 2.34888425s
	W1009 12:37:05.104728    3508 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:37:05.115588    3508 out.go:177] * Deleting "multinode-341000" in qemu2 ...
	W1009 12:37:05.141843    3508 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:37:05.141869    3508 start.go:729] Will try again in 5 seconds ...
	I1009 12:37:10.143985    3508 start.go:360] acquireMachinesLock for multinode-341000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:37:10.144654    3508 start.go:364] duration metric: took 534.416µs to acquireMachinesLock for "multinode-341000"
	I1009 12:37:10.144784    3508 start.go:93] Provisioning new machine with config: &{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:37:10.145103    3508 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:37:10.157658    3508 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:37:10.205709    3508 start.go:159] libmachine.API.Create for "multinode-341000" (driver="qemu2")
	I1009 12:37:10.205769    3508 client.go:168] LocalClient.Create starting
	I1009 12:37:10.205902    3508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:37:10.205980    3508 main.go:141] libmachine: Decoding PEM data...
	I1009 12:37:10.205996    3508 main.go:141] libmachine: Parsing certificate...
	I1009 12:37:10.206078    3508 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:37:10.206135    3508 main.go:141] libmachine: Decoding PEM data...
	I1009 12:37:10.206150    3508 main.go:141] libmachine: Parsing certificate...
	I1009 12:37:10.206733    3508 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:37:10.362779    3508 main.go:141] libmachine: Creating SSH key...
	I1009 12:37:10.468929    3508 main.go:141] libmachine: Creating Disk image...
	I1009 12:37:10.468934    3508 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:37:10.469114    3508 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:37:10.479042    3508 main.go:141] libmachine: STDOUT: 
	I1009 12:37:10.479070    3508 main.go:141] libmachine: STDERR: 
	I1009 12:37:10.479129    3508 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2 +20000M
	I1009 12:37:10.487617    3508 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:37:10.487633    3508 main.go:141] libmachine: STDERR: 
	I1009 12:37:10.487643    3508 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:37:10.487647    3508 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:37:10.487656    3508 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:37:10.487690    3508 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:c1:94:e1:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:37:10.489497    3508 main.go:141] libmachine: STDOUT: 
	I1009 12:37:10.489510    3508 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:37:10.489520    3508 client.go:171] duration metric: took 283.754125ms to LocalClient.Create
	I1009 12:37:12.491640    3508 start.go:128] duration metric: took 2.346579583s to createHost
	I1009 12:37:12.491688    3508 start.go:83] releasing machines lock for "multinode-341000", held for 2.347076291s
	W1009 12:37:12.492089    3508 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-341000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-341000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:37:12.504780    3508 out.go:201] 
	W1009 12:37:12.507690    3508 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:37:12.507713    3508 out.go:270] * 
	* 
	W1009 12:37:12.510223    3508 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:37:12.519657    3508 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-341000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (71.595417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.97s)
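Both start attempts above die at the same step: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A minimal Go sketch of the same reachability probe, using the socket path from the log (the 2-second timeout is an arbitrary choice):

```go
// Sketch: dial the unix socket that socket_vmnet_client needs.
// A "connection refused" here reproduces the failure in the log above,
// which typically means the socket_vmnet daemon is not running (or the
// path or its permissions are wrong).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

Every later failure in this serial group is downstream of this one: the cluster was never created, so each subsequent kubectl or node operation finds no server.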

TestMultiNode/serial/DeployApp2Nodes (80.25s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (137.049333ms)

** stderr ** 
	error: cluster "multinode-341000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- rollout status deployment/busybox: exit status 1 (63.132375ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.685875ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:12.871099    1686 retry.go:31] will retry after 655.435765ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.848583ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:13.636687    1686 retry.go:31] will retry after 1.915729382s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.630625ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:15.662377    1686 retry.go:31] will retry after 1.783700297s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.173709ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:17.555580    1686 retry.go:31] will retry after 3.149972001s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.063667ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:20.815835    1686 retry.go:31] will retry after 3.329112998s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.04975ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:24.255283    1686 retry.go:31] will retry after 5.799374841s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.270083ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:30.163225    1686 retry.go:31] will retry after 12.580838262s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.255792ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:42.854286    1686 retry.go:31] will retry after 14.826903182s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.853084ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 12:37:57.792075    1686 retry.go:31] will retry after 34.670906149s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.149708ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.929083ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.008625ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.873167ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.636125ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (34.455125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (80.25s)
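Note the retry cadence in the log (roughly 0.7s, 1.9s, 1.8s, 3.1s, 3.3s, 5.8s, 12.6s, 14.8s, 34.7s): an exponentially growing interval with random jitter, repeated until the test's time budget runs out. A self-contained sketch of that pattern follows; the constants and structure are illustrative and inferred from the intervals above, not taken from minikube's actual retry.go:

```go
// Sketch of jittered exponential backoff, the shape behind the
// "will retry after ..." lines above. All constants are illustrative.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << i                            // double the wait each attempt
		d += time.Duration(rand.Int63n(int64(d))) // add up to 100% jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retry(5, 500*time.Millisecond, func() error {
		// stand-in for the failing kubectl call in the test
		return errors.New(`no server found for cluster "multinode-341000"`)
	})
}
```

Here the retries could never succeed, since the cluster does not exist; the test spends its full 80-second budget before failing.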

TestMultiNode/serial/PingHostFrom2Pods (0.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-341000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.423875ms)

** stderr ** 
	error: no server found for cluster "multinode-341000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (34.368625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-341000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-341000 -v 3 --alsologtostderr: exit status 83 (45.971291ms)

-- stdout --
	* The control-plane node multinode-341000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-341000"

-- /stdout --
** stderr ** 
	I1009 12:38:32.983939    3588 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:32.984354    3588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:32.984358    3588 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:32.984362    3588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:32.984493    3588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:32.984736    3588 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:32.984949    3588 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:32.989665    3588 out.go:177] * The control-plane node multinode-341000 host is not running: state=Stopped
	I1009 12:38:32.993612    3588 out.go:177]   To start a cluster, run: "minikube start -p multinode-341000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-341000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (33.741167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-341000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-341000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (33.327125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-341000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-341000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-341000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (34.54875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)
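Two distinct errors stack up in this block: kubectl fails first ("context was not found"), printing nothing to stdout, and the test's JSON decode of that empty output then reports "unexpected end of JSON input", which is exactly what encoding/json returns for empty input. A minimal reproduction of the secondary error:

```go
// Why the secondary error reads "unexpected end of JSON input":
// decoding an empty byte slice with encoding/json yields that message.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v interface{}
	if err := json.Unmarshal([]byte(""), &v); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}
}
```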

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-341000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-341000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-341000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-341000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (33.600542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
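The assertion counts entries in Config.Nodes of the profile JSON quoted above and expects 3 (the requested control plane plus two workers); because the VM never started, the saved profile still holds the single placeholder node. A trimmed sketch of that count, with the types cut down to just the fields the check needs and a sample payload mirroring the log:

```go
// Sketch: count nodes in `minikube profile list --output json` output.
// Types are reduced to the fields used; the sample payload reflects the
// log above (one placeholder node instead of the expected three).
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-341000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test wants 3
	}
}
```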

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status --output json --alsologtostderr: exit status 7 (34.204ms)

-- stdout --
	{"Name":"multinode-341000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1009 12:38:33.217945    3600 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:33.218112    3600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.218115    3600 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:33.218118    3600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.218246    3600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:33.218376    3600 out.go:352] Setting JSON to true
	I1009 12:38:33.218387    3600 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:33.218445    3600 notify.go:220] Checking for updates...
	I1009 12:38:33.218597    3600 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:33.218603    3600 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:33.218862    3600 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:33.218866    3600 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:33.218868    3600 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-341000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (33.693667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
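The decode failure at multinode_test.go:191 is structural rather than a data error: with a single node, `status --output json` prints one JSON object (see the stdout above), while the multinode test decodes into a slice ([]cluster.Status). A self-contained reproduction, using a hypothetical local Status type in place of minikube's internal cluster.Status:

```go
// Sketch: unmarshalling a JSON object into a slice fails with
// "json: cannot unmarshal object into Go value of type []main.Status",
// the same class of error reported by the test above.
package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields visible in the stdout above; it is a local
// stand-in, not minikube's internal cluster.Status.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-341000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var list []Status
	if err := json.Unmarshal(raw, &list); err != nil {
		fmt.Println("as slice:", err) // fails: object decoded into slice
	}

	var one Status
	if err := json.Unmarshal(raw, &one); err == nil {
		fmt.Printf("as object: %+v\n", one) // succeeds
	}
}
```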

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 node stop m03: exit status 85 (50.222083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-341000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status: exit status 7 (33.82ms)

-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr: exit status 7 (34.309083ms)

-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1009 12:38:33.370807    3608 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:33.370996    3608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.370999    3608 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:33.371001    3608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.371150    3608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:33.371274    3608 out.go:352] Setting JSON to false
	I1009 12:38:33.371284    3608 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:33.371355    3608 notify.go:220] Checking for updates...
	I1009 12:38:33.371512    3608 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:33.371524    3608 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:33.371777    3608 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:33.371781    3608 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:33.371783    3608 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr": multinode-341000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (33.930333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (51.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.6325ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 12:38:33.439305    3612 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:33.439610    3612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.439613    3612 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:33.439616    3612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.439747    3612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:33.439988    3612 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:33.440184    3612 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:33.444636    3612 out.go:201] 
	W1009 12:38:33.447600    3612 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1009 12:38:33.447605    3612 out.go:270] * 
	* 
	W1009 12:38:33.448984    3612 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:38:33.451594    3612 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1009 12:38:33.439305    3612 out.go:345] Setting OutFile to fd 1 ...
I1009 12:38:33.439610    3612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 12:38:33.439613    3612 out.go:358] Setting ErrFile to fd 2...
I1009 12:38:33.439616    3612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 12:38:33.439747    3612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 12:38:33.439988    3612 mustload.go:65] Loading cluster: multinode-341000
I1009 12:38:33.440184    3612 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 12:38:33.444636    3612 out.go:201] 
W1009 12:38:33.447600    3612 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1009 12:38:33.447605    3612 out.go:270] * 
* 
W1009 12:38:33.448984    3612 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1009 12:38:33.451594    3612 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-341000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (34.042958ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:33.487917    3614 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:33.488096    3614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.488099    3614 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:33.488101    3614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:33.488239    3614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:33.488361    3614 out.go:352] Setting JSON to false
	I1009 12:38:33.488370    3614 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:33.488435    3614 notify.go:220] Checking for updates...
	I1009 12:38:33.488584    3614 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:33.488590    3614 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:33.488837    3614 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:33.488841    3614 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:33.488843    3614 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:33.489666    1686 retry.go:31] will retry after 1.310803141s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (80.942417ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:34.881538    3616 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:34.881762    3616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:34.881766    3616 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:34.881769    3616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:34.881928    3616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:34.882112    3616 out.go:352] Setting JSON to false
	I1009 12:38:34.882125    3616 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:34.882169    3616 notify.go:220] Checking for updates...
	I1009 12:38:34.882387    3616 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:34.882403    3616 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:34.882709    3616 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:34.882713    3616 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:34.882716    3616 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:34.883728    1686 retry.go:31] will retry after 1.392228091s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (79.334166ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:36.355493    3618 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:36.355716    3618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:36.355720    3618 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:36.355724    3618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:36.355877    3618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:36.356037    3618 out.go:352] Setting JSON to false
	I1009 12:38:36.356049    3618 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:36.356076    3618 notify.go:220] Checking for updates...
	I1009 12:38:36.356306    3618 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:36.356314    3618 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:36.356628    3618 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:36.356633    3618 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:36.356636    3618 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:36.357618    1686 retry.go:31] will retry after 2.299790259s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (77.6075ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:38.735026    3620 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:38.735260    3620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:38.735264    3620 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:38.735267    3620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:38.735445    3620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:38.735606    3620 out.go:352] Setting JSON to false
	I1009 12:38:38.735618    3620 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:38.735654    3620 notify.go:220] Checking for updates...
	I1009 12:38:38.735928    3620 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:38.735937    3620 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:38.736253    3620 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:38.736258    3620 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:38.736260    3620 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:38.737301    1686 retry.go:31] will retry after 2.702006435s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (78.596708ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:41.518057    3622 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:41.518289    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:41.518293    3622 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:41.518295    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:41.518464    3622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:41.518618    3622 out.go:352] Setting JSON to false
	I1009 12:38:41.518630    3622 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:41.518665    3622 notify.go:220] Checking for updates...
	I1009 12:38:41.518871    3622 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:41.518879    3622 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:41.519173    3622 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:41.519178    3622 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:41.519180    3622 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:41.520151    1686 retry.go:31] will retry after 6.988830856s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (77.134334ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:48.586112    3624 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:48.586352    3624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:48.586356    3624 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:48.586360    3624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:48.586524    3624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:48.586712    3624 out.go:352] Setting JSON to false
	I1009 12:38:48.586724    3624 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:48.586761    3624 notify.go:220] Checking for updates...
	I1009 12:38:48.587014    3624 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:48.587022    3624 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:48.587337    3624 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:48.587342    3624 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:48.587345    3624 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:48.588378    1686 retry.go:31] will retry after 7.18546202s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (78.550875ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:38:55.852523    3626 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:38:55.852745    3626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:55.852749    3626 out.go:358] Setting ErrFile to fd 2...
	I1009 12:38:55.852752    3626 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:38:55.852910    3626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:38:55.853069    3626 out.go:352] Setting JSON to false
	I1009 12:38:55.853081    3626 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:38:55.853139    3626 notify.go:220] Checking for updates...
	I1009 12:38:55.853353    3626 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:38:55.853361    3626 status.go:174] checking status of multinode-341000 ...
	I1009 12:38:55.853667    3626 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:38:55.853671    3626 status.go:384] host is not running, skipping remaining checks
	I1009 12:38:55.853673    3626 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:38:55.854661    1686 retry.go:31] will retry after 9.069782719s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (78.737292ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:39:05.003227    3628 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:39:05.003450    3628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:05.003454    3628 out.go:358] Setting ErrFile to fd 2...
	I1009 12:39:05.003457    3628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:05.003627    3628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:39:05.003782    3628 out.go:352] Setting JSON to false
	I1009 12:39:05.003798    3628 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:39:05.003836    3628 notify.go:220] Checking for updates...
	I1009 12:39:05.004058    3628 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:39:05.004067    3628 status.go:174] checking status of multinode-341000 ...
	I1009 12:39:05.004368    3628 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:39:05.004373    3628 status.go:384] host is not running, skipping remaining checks
	I1009 12:39:05.004375    3628 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1009 12:39:05.005345    1686 retry.go:31] will retry after 19.313674791s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr: exit status 7 (77.132375ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:39:24.395910    3630 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:39:24.396112    3630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:24.396116    3630 out.go:358] Setting ErrFile to fd 2...
	I1009 12:39:24.396119    3630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:24.396292    3630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:39:24.396438    3630 out.go:352] Setting JSON to false
	I1009 12:39:24.396450    3630 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:39:24.396495    3630 notify.go:220] Checking for updates...
	I1009 12:39:24.396695    3630 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:39:24.396703    3630 status.go:174] checking status of multinode-341000 ...
	I1009 12:39:24.397012    3630 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:39:24.397017    3630 status.go:384] host is not running, skipping remaining checks
	I1009 12:39:24.397019    3630 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-341000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (35.71125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.03s)
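Annotation: the retry.go lines above ("will retry after 1.31s ... 19.31s") show the harness re-probing status with a growing, randomized delay between attempts. A rough sketch of that pattern, inferred only from the spacing of the log lines (this is not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryStatus re-runs probe until it succeeds or attempts run out, sleeping
// a growing, randomized delay between tries -- the behaviour suggested by the
// "will retry after ..." lines above.
func retryStatus(attempts int, delay time.Duration, probe func() error) error {
	for i := 0; i < attempts; i++ {
		err := probe()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay += time.Duration(rand.Int63n(int64(delay))) // grow with jitter
	}
	return errors.New("host never left the Stopped state")
}

func main() {
	// Stand-in for the status probe; in the log above it always exits with status 7.
	_ = retryStatus(8, 100*time.Millisecond, func() error {
		return errors.New("exit status 7")
	})
}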

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-341000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-341000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-341000: (3.3846125s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-341000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-341000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229768708s)

                                                
                                                
-- stdout --
	* [multinode-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-341000" primary control-plane node in "multinode-341000" cluster
	* Restarting existing qemu2 VM for "multinode-341000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-341000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:39:27.922443    3654 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:39:27.922653    3654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:27.922657    3654 out.go:358] Setting ErrFile to fd 2...
	I1009 12:39:27.922660    3654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:27.922840    3654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:39:27.924071    3654 out.go:352] Setting JSON to false
	I1009 12:39:27.944201    3654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4137,"bootTime":1728498630,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:39:27.944283    3654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:39:27.949086    3654 out.go:177] * [multinode-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:39:27.956060    3654 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:39:27.956118    3654 notify.go:220] Checking for updates...
	I1009 12:39:27.963016    3654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:39:27.966038    3654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:39:27.969093    3654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:39:27.972001    3654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:39:27.975078    3654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:39:27.978405    3654 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:39:27.978467    3654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:39:27.982995    3654 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:39:27.993640    3654 start.go:297] selected driver: qemu2
	I1009 12:39:27.993647    3654 start.go:901] validating driver "qemu2" against &{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:39:27.993703    3654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:39:27.996355    3654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:39:27.996388    3654 cni.go:84] Creating CNI manager for ""
	I1009 12:39:27.996410    3654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 12:39:27.996479    3654 start.go:340] cluster config:
	{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:39:28.001023    3654 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:39:28.009054    3654 out.go:177] * Starting "multinode-341000" primary control-plane node in "multinode-341000" cluster
	I1009 12:39:28.012036    3654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:39:28.012053    3654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:39:28.012061    3654 cache.go:56] Caching tarball of preloaded images
	I1009 12:39:28.012138    3654 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:39:28.012144    3654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:39:28.012207    3654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/multinode-341000/config.json ...
	I1009 12:39:28.012637    3654 start.go:360] acquireMachinesLock for multinode-341000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:39:28.012688    3654 start.go:364] duration metric: took 44.333µs to acquireMachinesLock for "multinode-341000"
	I1009 12:39:28.012697    3654 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:39:28.012701    3654 fix.go:54] fixHost starting: 
	I1009 12:39:28.012821    3654 fix.go:112] recreateIfNeeded on multinode-341000: state=Stopped err=<nil>
	W1009 12:39:28.012830    3654 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:39:28.017025    3654 out.go:177] * Restarting existing qemu2 VM for "multinode-341000" ...
	I1009 12:39:28.024033    3654 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:39:28.024075    3654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:c1:94:e1:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:39:28.026450    3654 main.go:141] libmachine: STDOUT: 
	I1009 12:39:28.026470    3654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:39:28.026502    3654 fix.go:56] duration metric: took 13.799ms for fixHost
	I1009 12:39:28.026506    3654 start.go:83] releasing machines lock for "multinode-341000", held for 13.813709ms
	W1009 12:39:28.026513    3654 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:39:28.026561    3654 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:39:28.026566    3654 start.go:729] Will try again in 5 seconds ...
	I1009 12:39:33.028623    3654 start.go:360] acquireMachinesLock for multinode-341000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:39:33.029009    3654 start.go:364] duration metric: took 296.25µs to acquireMachinesLock for "multinode-341000"
	I1009 12:39:33.029124    3654 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:39:33.029141    3654 fix.go:54] fixHost starting: 
	I1009 12:39:33.029821    3654 fix.go:112] recreateIfNeeded on multinode-341000: state=Stopped err=<nil>
	W1009 12:39:33.029847    3654 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:39:33.034240    3654 out.go:177] * Restarting existing qemu2 VM for "multinode-341000" ...
	I1009 12:39:33.038231    3654 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:39:33.038440    3654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:c1:94:e1:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:39:33.048067    3654 main.go:141] libmachine: STDOUT: 
	I1009 12:39:33.048126    3654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:39:33.048197    3654 fix.go:56] duration metric: took 19.057458ms for fixHost
	I1009 12:39:33.048211    3654 start.go:83] releasing machines lock for "multinode-341000", held for 19.183875ms
	W1009 12:39:33.048398    3654 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-341000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-341000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:39:33.056158    3654 out.go:201] 
	W1009 12:39:33.060292    3654 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:39:33.060318    3654 out.go:270] * 
	* 
	W1009 12:39:33.063024    3654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:39:33.070207    3654 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-341000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-341000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (36.2185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.76s)
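Annotation: every restart in this report dies the same way: the qemu2 driver launches the VM through socket_vmnet_client, and the unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the config dump above) refuses connections, i.e. the socket_vmnet daemon is not running on the CI host. A quick probe that reproduces the same failure mode (illustrative sketch; the path is taken from the log):

package main

import (
	"fmt"
	"net"
)

// Dial the socket_vmnet control socket the way socket_vmnet_client must:
// when the daemon is down, net.Dial fails with "connection refused",
// matching the ERROR lines above.
func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}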

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 node delete m03: exit status 83 (43.233959ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-341000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-341000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-341000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr: exit status 7 (34.309417ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:39:33.272729    3668 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:39:33.272918    3668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:33.272926    3668 out.go:358] Setting ErrFile to fd 2...
	I1009 12:39:33.272928    3668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:33.273054    3668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:39:33.273172    3668 out.go:352] Setting JSON to false
	I1009 12:39:33.273182    3668 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:39:33.273256    3668 notify.go:220] Checking for updates...
	I1009 12:39:33.273408    3668 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:39:33.273417    3668 status.go:174] checking status of multinode-341000 ...
	I1009 12:39:33.273665    3668 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:39:33.273669    3668 status.go:384] host is not running, skipping remaining checks
	I1009 12:39:33.273670    3668 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (33.425667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
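Annotation: three distinct exit codes recur in this report: 80 for failed starts, 83 when a command needs a running control plane but the host is stopped, and 7 from status on a stopped host. A small helper showing how a wrapper can read those codes (a sketch; the binary path and profile come from the log, and the code meanings above are only what this report shows):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs the minikube binary from the report with the given args and
// returns its process exit code (0 on success).
func exitCode(args ...string) int {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // binary missing or never started
	}
	return 0
}

func main() {
	// Per the log: "node delete" on the stopped profile returns 83, "status" returns 7.
	fmt.Println(exitCode("-p", "multinode-341000", "node", "delete", "m03"))
	fmt.Println(exitCode("-p", "multinode-341000", "status"))
}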

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-341000 stop: (1.986878416s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status: exit status 7 (71.005791ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr: exit status 7 (35.22725ms)

                                                
                                                
-- stdout --
	multinode-341000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:39:35.399831    3686 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:39:35.400027    3686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:35.400031    3686 out.go:358] Setting ErrFile to fd 2...
	I1009 12:39:35.400033    3686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:35.400155    3686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:39:35.400272    3686 out.go:352] Setting JSON to false
	I1009 12:39:35.400282    3686 mustload.go:65] Loading cluster: multinode-341000
	I1009 12:39:35.400345    3686 notify.go:220] Checking for updates...
	I1009 12:39:35.400483    3686 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:39:35.400494    3686 status.go:174] checking status of multinode-341000 ...
	I1009 12:39:35.400731    3686 status.go:371] multinode-341000 host status = "Stopped" (err=<nil>)
	I1009 12:39:35.400734    3686 status.go:384] host is not running, skipping remaining checks
	I1009 12:39:35.400736    3686 status.go:176] multinode-341000 status: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr": multinode-341000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-341000 status --alsologtostderr": multinode-341000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (34.225583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.13s)
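Annotation: the status.go:176 lines above print a Go struct literal. The shape below is reconstructed purely from the printed field names (not copied from minikube's source) and reproduces the same %+v rendering:

package main

import "fmt"

// Status mirrors the fields printed at status.go:176 in the stderr blocks above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := Status{Name: "multinode-341000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	fmt.Printf("%+v\n", &s)
	// prints: &{Name:multinode-341000 Host:Stopped Kubelet:Stopped
	// APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
}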

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-341000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-341000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.194990792s)

                                                
                                                
-- stdout --
	* [multinode-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-341000" primary control-plane node in "multinode-341000" cluster
	* Restarting existing qemu2 VM for "multinode-341000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-341000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:39:35.467504    3690 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:39:35.467653    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:35.467657    3690 out.go:358] Setting ErrFile to fd 2...
	I1009 12:39:35.467659    3690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:39:35.467793    3690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:39:35.468844    3690 out.go:352] Setting JSON to false
	I1009 12:39:35.486531    3690 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4145,"bootTime":1728498630,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:39:35.486599    3690 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:39:35.491242    3690 out.go:177] * [multinode-341000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:39:35.506234    3690 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:39:35.506245    3690 notify.go:220] Checking for updates...
	I1009 12:39:35.511318    3690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:39:35.514171    3690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:39:35.517210    3690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:39:35.520193    3690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:39:35.523130    3690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:39:35.526559    3690 config.go:182] Loaded profile config "multinode-341000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:39:35.526833    3690 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:39:35.531162    3690 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:39:35.538134    3690 start.go:297] selected driver: qemu2
	I1009 12:39:35.538142    3690 start.go:901] validating driver "qemu2" against &{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:39:35.538207    3690 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:39:35.540753    3690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:39:35.540780    3690 cni.go:84] Creating CNI manager for ""
	I1009 12:39:35.540799    3690 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 12:39:35.540845    3690 start.go:340] cluster config:
	{Name:multinode-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:39:35.545255    3690 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:39:35.553165    3690 out.go:177] * Starting "multinode-341000" primary control-plane node in "multinode-341000" cluster
	I1009 12:39:35.557200    3690 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:39:35.557217    3690 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:39:35.557230    3690 cache.go:56] Caching tarball of preloaded images
	I1009 12:39:35.557292    3690 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:39:35.557299    3690 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:39:35.557351    3690 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/multinode-341000/config.json ...
	I1009 12:39:35.557696    3690 start.go:360] acquireMachinesLock for multinode-341000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:39:35.557727    3690 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "multinode-341000"
	I1009 12:39:35.557736    3690 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:39:35.557741    3690 fix.go:54] fixHost starting: 
	I1009 12:39:35.557861    3690 fix.go:112] recreateIfNeeded on multinode-341000: state=Stopped err=<nil>
	W1009 12:39:35.557869    3690 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:39:35.562120    3690 out.go:177] * Restarting existing qemu2 VM for "multinode-341000" ...
	I1009 12:39:35.570102    3690 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:39:35.570146    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:c1:94:e1:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:39:35.572423    3690 main.go:141] libmachine: STDOUT: 
	I1009 12:39:35.572442    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:39:35.572472    3690 fix.go:56] duration metric: took 14.729375ms for fixHost
	I1009 12:39:35.572477    3690 start.go:83] releasing machines lock for "multinode-341000", held for 14.745917ms
	W1009 12:39:35.572484    3690 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:39:35.572519    3690 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:39:35.572524    3690 start.go:729] Will try again in 5 seconds ...
	I1009 12:39:40.574553    3690 start.go:360] acquireMachinesLock for multinode-341000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:39:40.574871    3690 start.go:364] duration metric: took 248.709µs to acquireMachinesLock for "multinode-341000"
	I1009 12:39:40.574974    3690 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:39:40.574993    3690 fix.go:54] fixHost starting: 
	I1009 12:39:40.575631    3690 fix.go:112] recreateIfNeeded on multinode-341000: state=Stopped err=<nil>
	W1009 12:39:40.575658    3690 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:39:40.580957    3690 out.go:177] * Restarting existing qemu2 VM for "multinode-341000" ...
	I1009 12:39:40.588818    3690 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:39:40.588982    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:c1:94:e1:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/multinode-341000/disk.qcow2
	I1009 12:39:40.598608    3690 main.go:141] libmachine: STDOUT: 
	I1009 12:39:40.598661    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:39:40.598724    3690 fix.go:56] duration metric: took 23.732875ms for fixHost
	I1009 12:39:40.598744    3690 start.go:83] releasing machines lock for "multinode-341000", held for 23.848541ms
	W1009 12:39:40.598956    3690 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-341000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-341000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:39:40.604079    3690 out.go:201] 
	W1009 12:39:40.607897    3690 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:39:40.607920    3690 out.go:270] * 
	* 
	W1009 12:39:40.610801    3690 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:39:40.617933    3690 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-341000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (70.90075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
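
Note: every restart attempt above fails at the same step — minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the build agent, assuming socket_vmnet was installed via Homebrew (the service name is an assumption; only the paths are taken from the log):

	# Verify the daemon socket exists and is being serviced
	ls -l /var/run/socket_vmnet
	# Restart the daemon if the socket is missing or refusing connections
	sudo brew services restart socket_vmnet
	# Re-run the failing command from the log once the socket accepts connections
	out/minikube-darwin-arm64 start -p multinode-341000 --wait=true -v=8 --alsologtostderr --driver=qemu2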

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-341000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-341000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-341000-m01 --driver=qemu2 : exit status 80 (9.8690255s)

                                                
                                                
-- stdout --
	* [multinode-341000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-341000-m01" primary control-plane node in "multinode-341000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-341000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-341000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-341000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-341000-m02 --driver=qemu2 : exit status 80 (10.011909375s)

                                                
                                                
-- stdout --
	* [multinode-341000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-341000-m02" primary control-plane node in "multinode-341000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-341000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-341000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-341000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-341000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-341000: exit status 83 (80.217917ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-341000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-341000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-341000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-341000 -n multinode-341000: exit status 7 (33.893041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.11s)
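
Note: the name-conflict logic itself is never exercised here — both "start -p multinode-341000-m01" and "-m02" die at VM creation with the same socket_vmnet refusal, and the only hard assertion failure (multinode_test.go:474) is for the -m02 profile, which the test apparently expects to start cleanly. The later "node add" exits 83 simply because the control-plane host is stopped. A sketch for clearing the leftover profiles before a re-run, using only commands that appear in this report:

	# Remove the conflicting scratch profile (the test's own cleanup also does this)
	out/minikube-darwin-arm64 delete -p multinode-341000-m02
	# Confirm which node names remain attached to the original cluster
	out/minikube-darwin-arm64 node list -p multinode-341000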

                                                
                                    
TestPreload (10.08s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.927610416s)

                                                
                                                
-- stdout --
	* [test-preload-394000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-394000" primary control-plane node in "test-preload-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:40:00.953745    3747 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:40:00.953901    3747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:00.953905    3747 out.go:358] Setting ErrFile to fd 2...
	I1009 12:40:00.953907    3747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:00.954026    3747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:40:00.955160    3747 out.go:352] Setting JSON to false
	I1009 12:40:00.972802    3747 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4170,"bootTime":1728498630,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:40:00.972867    3747 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:40:00.978488    3747 out.go:177] * [test-preload-394000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:40:00.986436    3747 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:40:00.986483    3747 notify.go:220] Checking for updates...
	I1009 12:40:00.993488    3747 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:40:00.996483    3747 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:40:00.999437    3747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:40:01.002580    3747 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:40:01.005396    3747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:40:01.008797    3747 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:40:01.008855    3747 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:40:01.013480    3747 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:40:01.020467    3747 start.go:297] selected driver: qemu2
	I1009 12:40:01.020475    3747 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:40:01.020481    3747 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:40:01.023007    3747 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:40:01.026488    3747 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:40:01.029545    3747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:40:01.029561    3747 cni.go:84] Creating CNI manager for ""
	I1009 12:40:01.029585    3747 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:40:01.029589    3747 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:40:01.029624    3747 start.go:340] cluster config:
	{Name:test-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:40:01.034236    3747 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.042264    3747 out.go:177] * Starting "test-preload-394000" primary control-plane node in "test-preload-394000" cluster
	I1009 12:40:01.046503    3747 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1009 12:40:01.046566    3747 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/test-preload-394000/config.json ...
	I1009 12:40:01.046582    3747 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/test-preload-394000/config.json: {Name:mk25afc0eebf57b9c3ee60bd7b1132d75f1465bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:40:01.046594    3747 cache.go:107] acquiring lock: {Name:mk25e2e0eee4eb3d0e5a38063d8e8e0bca63e62c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046594    3747 cache.go:107] acquiring lock: {Name:mk9d1ee5a56a6b9738f48c1b1954c36696fbf1ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046598    3747 cache.go:107] acquiring lock: {Name:mk1720a8b49d870da4c47feaeb082c5d05ed62c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046620    3747 cache.go:107] acquiring lock: {Name:mk70f363e9330869eac0a991d578747c8dc7d6bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046801    3747 cache.go:107] acquiring lock: {Name:mka013f69c9b60fd9c25cba6aeebd4cefbfb8b5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046846    3747 cache.go:107] acquiring lock: {Name:mka42c3d9fc6fedf11abacc380fb306d23316d3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046922    3747 cache.go:107] acquiring lock: {Name:mk12e4cff261e70b86f69704f388cb7cc1cd6ccf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046942    3747 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1009 12:40:01.046918    3747 cache.go:107] acquiring lock: {Name:mkeb302f4d938a102b83610e99697c1306b0f4e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:01.046968    3747 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1009 12:40:01.047026    3747 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1009 12:40:01.047063    3747 start.go:360] acquireMachinesLock for test-preload-394000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:01.047208    3747 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:40:01.047288    3747 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1009 12:40:01.047369    3747 start.go:364] duration metric: took 289.833µs to acquireMachinesLock for "test-preload-394000"
	I1009 12:40:01.047377    3747 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1009 12:40:01.047397    3747 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:40:01.047383    3747 start.go:93] Provisioning new machine with config: &{Name:test-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:40:01.047435    3747 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:40:01.047483    3747 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:40:01.055459    3747 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:40:01.059525    3747 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1009 12:40:01.059561    3747 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1009 12:40:01.059522    3747 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1009 12:40:01.059564    3747 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1009 12:40:01.059521    3747 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:40:01.059575    3747 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1009 12:40:01.059570    3747 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:40:01.059606    3747 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:40:01.074015    3747 start.go:159] libmachine.API.Create for "test-preload-394000" (driver="qemu2")
	I1009 12:40:01.074041    3747 client.go:168] LocalClient.Create starting
	I1009 12:40:01.074103    3747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:40:01.074141    3747 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:01.074151    3747 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:01.074187    3747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:40:01.074223    3747 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:01.074242    3747 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:01.074550    3747 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:40:01.218907    3747 main.go:141] libmachine: Creating SSH key...
	I1009 12:40:01.359080    3747 main.go:141] libmachine: Creating Disk image...
	I1009 12:40:01.359096    3747 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:40:01.359284    3747 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2
	I1009 12:40:01.369603    3747 main.go:141] libmachine: STDOUT: 
	I1009 12:40:01.369640    3747 main.go:141] libmachine: STDERR: 
	I1009 12:40:01.369738    3747 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2 +20000M
	I1009 12:40:01.379892    3747 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:40:01.379925    3747 main.go:141] libmachine: STDERR: 
	I1009 12:40:01.379937    3747 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2
	I1009 12:40:01.379941    3747 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:40:01.379957    3747 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:01.379982    3747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e5:3e:e8:b3:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2
	I1009 12:40:01.381974    3747 main.go:141] libmachine: STDOUT: 
	I1009 12:40:01.381990    3747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:01.382007    3747 client.go:171] duration metric: took 307.970625ms to LocalClient.Create
	I1009 12:40:01.704673    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1009 12:40:01.728295    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1009 12:40:01.738150    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1009 12:40:01.863030    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1009 12:40:01.863049    3747 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 816.1595ms
	I1009 12:40:01.863057    3747 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1009 12:40:01.898032    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1009 12:40:01.917422    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W1009 12:40:02.015320    3747 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1009 12:40:02.015394    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1009 12:40:02.163085    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1009 12:40:02.331628    3747 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 12:40:02.331720    3747 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 12:40:02.806888    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 12:40:02.806941    3747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.760397083s
	I1009 12:40:02.806968    3747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 12:40:03.382261    3747 start.go:128] duration metric: took 2.334843792s to createHost
	I1009 12:40:03.382328    3747 start.go:83] releasing machines lock for "test-preload-394000", held for 2.335013291s
	W1009 12:40:03.382394    3747 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:03.398300    3747 out.go:177] * Deleting "test-preload-394000" in qemu2 ...
	W1009 12:40:03.431070    3747 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:03.431103    3747 start.go:729] Will try again in 5 seconds ...
	I1009 12:40:04.640745    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1009 12:40:04.640803    3747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.594130375s
	I1009 12:40:04.640837    3747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1009 12:40:04.738391    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1009 12:40:04.738434    3747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.691945333s
	I1009 12:40:04.738457    3747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1009 12:40:05.858767    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1009 12:40:05.858828    3747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.81236725s
	I1009 12:40:05.858859    3747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1009 12:40:06.476340    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1009 12:40:06.476424    3747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.429954125s
	I1009 12:40:06.476453    3747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1009 12:40:06.763677    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1009 12:40:06.763732    3747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.71698375s
	I1009 12:40:06.763774    3747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1009 12:40:08.431510    3747 start.go:360] acquireMachinesLock for test-preload-394000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:08.432000    3747 start.go:364] duration metric: took 419.167µs to acquireMachinesLock for "test-preload-394000"
	I1009 12:40:08.432133    3747 start.go:93] Provisioning new machine with config: &{Name:test-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:40:08.432374    3747 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:40:08.441988    3747 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:40:08.491418    3747 start.go:159] libmachine.API.Create for "test-preload-394000" (driver="qemu2")
	I1009 12:40:08.491461    3747 client.go:168] LocalClient.Create starting
	I1009 12:40:08.491583    3747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:40:08.491666    3747 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:08.491689    3747 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:08.491755    3747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:40:08.491812    3747 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:08.491827    3747 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:08.492387    3747 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:40:08.653220    3747 main.go:141] libmachine: Creating SSH key...
	I1009 12:40:08.778912    3747 main.go:141] libmachine: Creating Disk image...
	I1009 12:40:08.778918    3747 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:40:08.779106    3747 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2
	I1009 12:40:08.789046    3747 main.go:141] libmachine: STDOUT: 
	I1009 12:40:08.789063    3747 main.go:141] libmachine: STDERR: 
	I1009 12:40:08.789125    3747 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2 +20000M
	I1009 12:40:08.797774    3747 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:40:08.797789    3747 main.go:141] libmachine: STDERR: 
	I1009 12:40:08.797807    3747 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2
	I1009 12:40:08.797811    3747 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:40:08.797827    3747 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:08.797857    3747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:df:bb:07:0c:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/test-preload-394000/disk.qcow2
	I1009 12:40:08.799743    3747 main.go:141] libmachine: STDOUT: 
	I1009 12:40:08.799756    3747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:08.799770    3747 client.go:171] duration metric: took 308.313541ms to LocalClient.Create
	I1009 12:40:10.567681    3747 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1009 12:40:10.567741    3747 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.521226s
	I1009 12:40:10.567768    3747 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1009 12:40:10.567809    3747 cache.go:87] Successfully saved all images to host disk.
	I1009 12:40:10.801936    3747 start.go:128] duration metric: took 2.369603s to createHost
	I1009 12:40:10.802031    3747 start.go:83] releasing machines lock for "test-preload-394000", held for 2.370071208s
	W1009 12:40:10.802322    3747 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:10.813878    3747 out.go:201] 
	W1009 12:40:10.817999    3747 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:40:10.818024    3747 out.go:270] * 
	* 
	W1009 12:40:10.820489    3747 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:40:10.831952    3747 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-09 12:40:10.852718 -0700 PDT m=+3276.715755501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-394000 -n test-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-394000 -n test-preload-394000: exit status 7 (70.659333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-394000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-394000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-394000
--- FAIL: TestPreload (10.08s)
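
This failure, like most in this report, traces to one root cause: the socket_vmnet daemon was not listening on /var/run/socket_vmnet when qemu-system-aarch64 tried to attach its netdev, so the client exited with "Connection refused". A minimal manual check on the test host might look like the following sketch (it assumes the /opt/socket_vmnet install path shown in the qemu invocation above; the gateway address is an illustrative placeholder):

    # Is the daemon running, and does the socket path exist?
    pgrep -fl socket_vmnet || echo "socket_vmnet is not running"
    ls -l /var/run/socket_vmnet

    # If not, start it by hand (requires root; 192.168.105.1 is a placeholder gateway)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet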

TestScheduledStopUnix (10.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-470000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-470000 --memory=2048 --driver=qemu2 : exit status 80 (10.010012208s)

-- stdout --
	* [scheduled-stop-470000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-470000" primary control-plane node in "scheduled-stop-470000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-470000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-470000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-470000" primary control-plane node in "scheduled-stop-470000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-470000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-09 12:40:21.012901 -0700 PDT m=+3286.876228918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-470000 -n scheduled-stop-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-470000 -n scheduled-stop-470000: exit status 7 (73.946333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-470000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-470000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-470000
--- FAIL: TestScheduledStopUnix (10.16s)

TestSkaffold (12.91s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3686470570 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3686470570 version: (1.021700084s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-993000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-993000 --memory=2600 --driver=qemu2 : exit status 80 (9.994727292s)

-- stdout --
	* [skaffold-993000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-993000" primary control-plane node in "skaffold-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-993000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-993000" primary control-plane node in "skaffold-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-09 12:40:33.925774 -0700 PDT m=+3299.789471335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-993000 -n skaffold-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-993000 -n skaffold-993000: exit status 7 (66.16825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-993000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-993000
--- FAIL: TestSkaffold (12.91s)

TestRunningBinaryUpgrade (626.16s)
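
This test drives an in-place binary upgrade: it first boots a cluster with an old minikube release downloaded to a temp path, then re-runs start against the same profile using the binary under test. Reduced to its two shell steps, the flow is roughly the following (the profile name and old-release path are illustrative stand-ins for the temp paths in the transcript below):

    # 1. create a running cluster with the previous release
    /tmp/minikube-v1.26.0 start -p running-upgrade --memory=2200 --vm-driver=qemu2

    # 2. upgrade it in place with the binary under test
    out/minikube-darwin-arm64 start -p running-upgrade --memory=2200 --alsologtostderr -v=1 --driver=qemu2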

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4025666055 start -p running-upgrade-763000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4025666055 start -p running-upgrade-763000 --memory=2200 --vm-driver=qemu2 : (1m14.336836583s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1009 12:46:39.157897    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
E1009 12:46:56.053349    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m36.626116s)

-- stdout --
	* [running-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-763000" primary control-plane node in "running-upgrade-763000" cluster
	* Updating the running qemu2 "running-upgrade-763000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1009 12:42:10.435886    4056 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:42:10.436082    4056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:42:10.436086    4056 out.go:358] Setting ErrFile to fd 2...
	I1009 12:42:10.436088    4056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:42:10.436205    4056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:42:10.437195    4056 out.go:352] Setting JSON to false
	I1009 12:42:10.455531    4056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4300,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:42:10.455657    4056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:42:10.460135    4056 out.go:177] * [running-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:42:10.468121    4056 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:42:10.468220    4056 notify.go:220] Checking for updates...
	I1009 12:42:10.476072    4056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:10.480044    4056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:42:10.483027    4056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:42:10.486053    4056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:42:10.489080    4056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:42:10.492285    4056 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:10.495012    4056 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 12:42:10.498076    4056 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:42:10.502037    4056 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:42:10.509058    4056 start.go:297] selected driver: qemu2
	I1009 12:42:10.509066    4056 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:10.509112    4056 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:42:10.512061    4056 cni.go:84] Creating CNI manager for ""
	I1009 12:42:10.512092    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:10.512117    4056 start.go:340] cluster config:
	{Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:10.512181    4056 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:42:10.520063    4056 out.go:177] * Starting "running-upgrade-763000" primary control-plane node in "running-upgrade-763000" cluster
	I1009 12:42:10.524060    4056 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:10.524072    4056 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1009 12:42:10.524077    4056 cache.go:56] Caching tarball of preloaded images
	I1009 12:42:10.524125    4056 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:42:10.524129    4056 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1009 12:42:10.524174    4056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/config.json ...
	I1009 12:42:10.524522    4056 start.go:360] acquireMachinesLock for running-upgrade-763000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:42:22.161126    4056 start.go:364] duration metric: took 11.636869s to acquireMachinesLock for "running-upgrade-763000"
	I1009 12:42:22.161174    4056 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:42:22.161181    4056 fix.go:54] fixHost starting: 
	I1009 12:42:22.162081    4056 fix.go:112] recreateIfNeeded on running-upgrade-763000: state=Running err=<nil>
	W1009 12:42:22.162093    4056 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:42:22.169232    4056 out.go:177] * Updating the running qemu2 "running-upgrade-763000" VM ...
	I1009 12:42:22.173092    4056 machine.go:93] provisionDockerMachine start ...
	I1009 12:42:22.173145    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.173270    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.173274    4056 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 12:42:22.234313    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763000
	
	I1009 12:42:22.234330    4056 buildroot.go:166] provisioning hostname "running-upgrade-763000"
	I1009 12:42:22.234375    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.234495    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.234501    4056 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-763000 && echo "running-upgrade-763000" | sudo tee /etc/hostname
	I1009 12:42:22.299631    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763000
	
	I1009 12:42:22.299707    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.299824    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.299833    4056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-763000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-763000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-763000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 12:42:22.374079    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 12:42:22.374096    4056 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19780-1164/.minikube CaCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19780-1164/.minikube}
	I1009 12:42:22.374106    4056 buildroot.go:174] setting up certificates
	I1009 12:42:22.374124    4056 provision.go:84] configureAuth start
	I1009 12:42:22.374132    4056 provision.go:143] copyHostCerts
	I1009 12:42:22.374204    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem, removing ...
	I1009 12:42:22.374211    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem
	I1009 12:42:22.374320    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem (1078 bytes)
	I1009 12:42:22.374491    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem, removing ...
	I1009 12:42:22.374495    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem
	I1009 12:42:22.374539    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem (1123 bytes)
	I1009 12:42:22.374649    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem, removing ...
	I1009 12:42:22.374652    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem
	I1009 12:42:22.374693    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem (1679 bytes)
	I1009 12:42:22.374791    4056 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-763000 san=[127.0.0.1 localhost minikube running-upgrade-763000]
	I1009 12:42:22.456781    4056 provision.go:177] copyRemoteCerts
	I1009 12:42:22.456943    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 12:42:22.456962    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:42:22.490551    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 12:42:22.497823    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 12:42:22.505783    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 12:42:22.516484    4056 provision.go:87] duration metric: took 142.35775ms to configureAuth
	I1009 12:42:22.516497    4056 buildroot.go:189] setting minikube options for container-runtime
	I1009 12:42:22.516641    4056 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:22.516689    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.516775    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.516781    4056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 12:42:22.580595    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 12:42:22.580606    4056 buildroot.go:70] root file system type: tmpfs
	I1009 12:42:22.580680    4056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 12:42:22.580754    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.580879    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.580913    4056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 12:42:22.646106    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 12:42:22.646180    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.646298    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.646308    4056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 12:42:22.711149    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 12:42:22.711161    4056 machine.go:96] duration metric: took 538.078167ms to provisionDockerMachine
	I1009 12:42:22.711167    4056 start.go:293] postStartSetup for "running-upgrade-763000" (driver="qemu2")
	I1009 12:42:22.711179    4056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 12:42:22.711218    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 12:42:22.711228    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:42:22.743717    4056 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 12:42:22.745013    4056 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 12:42:22.745021    4056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/addons for local assets ...
	I1009 12:42:22.745096    4056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/files for local assets ...
	I1009 12:42:22.745185    4056 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem -> 16862.pem in /etc/ssl/certs
	I1009 12:42:22.745288    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 12:42:22.747904    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:22.754615    4056 start.go:296] duration metric: took 43.443959ms for postStartSetup
	I1009 12:42:22.754631    4056 fix.go:56] duration metric: took 593.468459ms for fixHost
	I1009 12:42:22.754674    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.754788    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.754793    4056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 12:42:22.820401    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728502943.081851208
	
	I1009 12:42:22.820412    4056 fix.go:216] guest clock: 1728502943.081851208
	I1009 12:42:22.820417    4056 fix.go:229] Guest: 2024-10-09 12:42:23.081851208 -0700 PDT Remote: 2024-10-09 12:42:22.754632 -0700 PDT m=+12.344032710 (delta=327.219208ms)
	I1009 12:42:22.820430    4056 fix.go:200] guest clock delta is within tolerance: 327.219208ms
	I1009 12:42:22.820433    4056 start.go:83] releasing machines lock for "running-upgrade-763000", held for 659.307334ms
	I1009 12:42:22.820519    4056 ssh_runner.go:195] Run: cat /version.json
	I1009 12:42:22.820529    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:42:22.820520    4056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 12:42:22.820560    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	W1009 12:42:22.821169    4056 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53683: connect: connection refused
	I1009 12:42:22.821194    4056 retry.go:31] will retry after 174.832429ms: dial tcp [::1]:53683: connect: connection refused
	W1009 12:42:22.853324    4056 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1009 12:42:22.853382    4056 ssh_runner.go:195] Run: systemctl --version
	I1009 12:42:22.855272    4056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 12:42:22.856874    4056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 12:42:22.856908    4056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1009 12:42:22.861369    4056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1009 12:42:22.870919    4056 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 12:42:22.870935    4056 start.go:495] detecting cgroup driver to use...
	I1009 12:42:22.871006    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:22.877608    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1009 12:42:22.889061    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 12:42:22.892695    4056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 12:42:22.892757    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 12:42:22.895874    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.898614    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 12:42:22.901688    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.905171    4056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 12:42:22.909367    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 12:42:22.913950    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 12:42:22.918019    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 12:42:22.926790    4056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 12:42:22.931170    4056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 12:42:22.935317    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:23.063260    4056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 12:42:23.077757    4056 start.go:495] detecting cgroup driver to use...
	I1009 12:42:23.077857    4056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 12:42:23.086641    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:23.137747    4056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 12:42:23.150479    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:23.156202    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 12:42:23.160871    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:23.166575    4056 ssh_runner.go:195] Run: which cri-dockerd
	I1009 12:42:23.167977    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 12:42:23.171001    4056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 12:42:23.176197    4056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 12:42:23.299765    4056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 12:42:23.414555    4056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 12:42:23.414688    4056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 12:42:23.424788    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:23.521398    4056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:25.791190    4056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.269840958s)
	I1009 12:42:25.791281    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1009 12:42:25.796926    4056 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1009 12:42:25.805807    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:25.811050    4056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 12:42:25.905420    4056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 12:42:26.000139    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:26.091057    4056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 12:42:26.098440    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:26.104484    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:26.192095    4056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1009 12:42:26.241537    4056 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 12:42:26.241651    4056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 12:42:26.244367    4056 start.go:563] Will wait 60s for crictl version
	I1009 12:42:26.244439    4056 ssh_runner.go:195] Run: which crictl
	I1009 12:42:26.246168    4056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 12:42:26.260011    4056 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1009 12:42:26.260092    4056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:26.275351    4056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:26.299801    4056 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1009 12:42:26.299903    4056 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1009 12:42:26.301552    4056 kubeadm.go:883] updating cluster {Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1009 12:42:26.301606    4056 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:26.301662    4056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:26.314357    4056 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:26.314365    4056 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:26.314440    4056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:26.318359    4056 ssh_runner.go:195] Run: which lz4
	I1009 12:42:26.320155    4056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 12:42:26.321675    4056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 12:42:26.321696    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1009 12:42:27.260257    4056 docker.go:649] duration metric: took 940.188792ms to copy over tarball
	I1009 12:42:27.260330    4056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 12:42:28.368217    4056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.107904167s)
	I1009 12:42:28.368238    4056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 12:42:28.386031    4056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:28.389183    4056 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1009 12:42:28.394313    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:28.481429    4056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:29.032655    4056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:29.053077    4056 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:29.053088    4056 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:29.053092    4056 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 12:42:29.057400    4056 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:29.060050    4056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.062399    4056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.062476    4056 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:29.065652    4056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.066218    4056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.068702    4056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.068743    4056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.071977    4056 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1009 12:42:29.072053    4056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.074908    4056 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.075088    4056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.077276    4056 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1009 12:42:29.077379    4056 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.078362    4056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.079132    4056 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.533125    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.544363    4056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1009 12:42:29.544403    4056 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.544465    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.555342    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1009 12:42:29.565966    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.576942    4056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1009 12:42:29.577021    4056 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.577071    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.589878    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1009 12:42:29.598788    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.610552    4056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1009 12:42:29.610596    4056 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.610659    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.621387    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1009 12:42:29.630507    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.642261    4056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1009 12:42:29.642285    4056 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.642351    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.655212    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1009 12:42:29.717199    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1009 12:42:29.728487    4056 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1009 12:42:29.728518    4056 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1009 12:42:29.728584    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1009 12:42:29.739677    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1009 12:42:29.739813    4056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.741479    4056 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1009 12:42:29.741490    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1009 12:42:29.749428    4056 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.749440    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1009 12:42:29.776353    4056 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
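
The load step above shells out to `sudo cat file | docker load`. A sketch of the same operation done natively, assuming the docker CLI is on PATH and the image tarball is readable by the caller (the function name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func dockerLoad(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // equivalent of: cat path | docker load
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	fmt.Print(string(out))
    	return nil
    }

    func main() {
    	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
    		fmt.Println(err)
    	}
    }
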
	I1009 12:42:29.802811    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.813787    4056 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1009 12:42:29.813812    4056 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.813880    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.825213    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1009 12:42:29.825339    4056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:29.826880    4056 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1009 12:42:29.826898    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W1009 12:42:29.848006    4056 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:29.848158    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.909717    4056 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1009 12:42:29.909744    4056 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.909808    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.940648    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1009 12:42:29.940776    4056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:29.953752    4056 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1009 12:42:29.953779    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1009 12:42:30.047180    4056 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:30.047198    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1009 12:42:30.150463    4056 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1009 12:42:30.150484    4056 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:30.150489    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1009 12:42:30.292051    4056 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W1009 12:42:31.810874    4056 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:31.812278    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:31.849914    4056 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 12:42:31.849963    4056 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:31.850107    4056 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:31.869597    4056 cache_images.go:92] duration metric: took 2.816573166s to LoadCachedImages
	W1009 12:42:31.869662    4056 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1009 12:42:31.869672    4056 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1009 12:42:31.869746    4056 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-763000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 12:42:31.869832    4056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 12:42:31.887427    4056 cni.go:84] Creating CNI manager for ""
	I1009 12:42:31.887439    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:31.887445    4056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 12:42:31.887454    4056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-763000 NodeName:running-upgrade-763000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 12:42:31.887531    4056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-763000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
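
The generated file above bundles four API objects separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of splitting such a multi-document file and reporting each object's kind, assuming gopkg.in/yaml.v3 is available and a local copy named kubeadm.yaml exists (both are assumptions for illustration):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()

    	// yaml.Decoder iterates over the ----separated documents.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			fmt.Println(err)
    			return
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }
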
	
	I1009 12:42:31.887612    4056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1009 12:42:31.890679    4056 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 12:42:31.890718    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 12:42:31.893784    4056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1009 12:42:31.898865    4056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 12:42:31.903846    4056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
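
The three "scp memory" writes above materialize in-memory renders to disk: the kubelet drop-in, the kubelet unit, and the new kubeadm config. A sketch of writing the drop-in shown earlier to its systemd location (assumes root; paths as in the log, helper layout illustrative):

    package main

    import (
    	"fmt"
    	"os"
    )

    // dropIn reproduces the kubelet unit override logged above.
    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-763000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15

    [Install]
    `

    func main() {
    	dir := "/etc/systemd/system/kubelet.service.d"
    	if err := os.MkdirAll(dir, 0755); err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
    		fmt.Println(err)
    	}
    }
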
	I1009 12:42:31.909599    4056 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1009 12:42:31.910833    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:31.990409    4056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:42:31.996694    4056 certs.go:68] Setting up /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000 for IP: 10.0.2.15
	I1009 12:42:31.996705    4056 certs.go:194] generating shared ca certs ...
	I1009 12:42:31.996715    4056 certs.go:226] acquiring lock for ca certs: {Name:mkbf858b3b2074a12d126c3a2fed20f98f420e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:31.997051    4056 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key
	I1009 12:42:31.997283    4056 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key
	I1009 12:42:31.997290    4056 certs.go:256] generating profile certs ...
	I1009 12:42:31.997537    4056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key
	I1009 12:42:31.997552    4056 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee
	I1009 12:42:31.997560    4056 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1009 12:42:32.077281    4056 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee ...
	I1009 12:42:32.077293    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee: {Name:mk01607440c75d660555c30ff5d21966b49fe6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.077574    4056 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee ...
	I1009 12:42:32.077580    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee: {Name:mk2f700d3fcca1f4332e1fcf937d6867d9e88c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.077752    4056 certs.go:381] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt
	I1009 12:42:32.077875    4056 certs.go:385] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key
	I1009 12:42:32.078131    4056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/proxy-client.key
	I1009 12:42:32.078295    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem (1338 bytes)
	W1009 12:42:32.078433    4056 certs.go:480] ignoring /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686_empty.pem, impossibly tiny 0 bytes
	I1009 12:42:32.078441    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 12:42:32.078462    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem (1078 bytes)
	I1009 12:42:32.078483    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem (1123 bytes)
	I1009 12:42:32.078500    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem (1679 bytes)
	I1009 12:42:32.078545    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:32.080011    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 12:42:32.088808    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 12:42:32.097017    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 12:42:32.105233    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 12:42:32.112960    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 12:42:32.120202    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 12:42:32.128147    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 12:42:32.136106    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 12:42:32.144381    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /usr/share/ca-certificates/16862.pem (1708 bytes)
	I1009 12:42:32.152296    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 12:42:32.159552    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem --> /usr/share/ca-certificates/1686.pem (1338 bytes)
	I1009 12:42:32.167230    4056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 12:42:32.172708    4056 ssh_runner.go:195] Run: openssl version
	I1009 12:42:32.174805    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16862.pem && ln -fs /usr/share/ca-certificates/16862.pem /etc/ssl/certs/16862.pem"
	I1009 12:42:32.178193    4056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.179558    4056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:49 /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.179596    4056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.181714    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16862.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 12:42:32.184516    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 12:42:32.188090    4056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.189741    4056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.189775    4056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.191665    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 12:42:32.195025    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1686.pem && ln -fs /usr/share/ca-certificates/1686.pem /etc/ssl/certs/1686.pem"
	I1009 12:42:32.198214    4056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.199772    4056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:49 /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.199806    4056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.201842    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1686.pem /etc/ssl/certs/51391683.0"
	I1009 12:42:32.204877    4056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 12:42:32.206780    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 12:42:32.208659    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 12:42:32.210615    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 12:42:32.212832    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 12:42:32.215858    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 12:42:32.217931    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
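
The six openssl invocations above are 24-hour expiry checks: `-checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds, which is what lets minikube reuse existing certs on restart. A sketch, assuming openssl on PATH (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // validFor24h reports whether the certificate is still valid
    // 24 hours from now, mirroring the logged openssl commands.
    func validFor24h(certPath string) bool {
    	err := exec.Command("openssl", "x509", "-noout",
    		"-in", certPath, "-checkend", "86400").Run()
    	return err == nil
    }

    func main() {
    	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
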
	I1009 12:42:32.219849    4056 kubeadm.go:392] StartCluster: {Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:32.219928    4056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.231199    4056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 12:42:32.234976    4056 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 12:42:32.234983    4056 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 12:42:32.235025    4056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 12:42:32.238254    4056 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.239532    4056 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-763000" does not appear in /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:32.239593    4056 kubeconfig.go:62] /Users/jenkins/minikube-integration/19780-1164/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-763000" cluster setting kubeconfig missing "running-upgrade-763000" context setting]
	I1009 12:42:32.239984    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.240755    4056 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c0f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 12:42:32.246001    4056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 12:42:32.249273    4056 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-763000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
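
The drift detected here is exactly the two changes the newer minikube renders into the config: the CRI socket gains the unix:// scheme and cgroupDriver flips from systemd to cgroupfs (plus two added kubelet fields). Detection itself is just `diff -u` on the old and new rendered files; a sketch under that assumption (sudo and both paths assumed present):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new`; diff exits 1 when the
    // files differ and 0 when they are identical.
    func configDrifted(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	return err != nil, string(out)
    }

    func main() {
    	drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if drifted {
    		fmt.Println("kubeadm config drift detected:\n" + diff)
    	}
    }
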
	I1009 12:42:32.249280    4056 kubeadm.go:1160] stopping kube-system containers ...
	I1009 12:42:32.249336    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.260852    4056 docker.go:483] Stopping containers: [60b710e3ac8d 0fe6dcae56d3 ec3f65181026 5ae86cb0a43f 292e40c297f5 6ad25cea7b79 ef6bd7897f53 ae0a291f9f06 557a401ad1a9 301a37b51d64 6c7a674ad960 120043bae0b5 21acea369545 a29f202107da f9e43d160ee4 a2ea44b2098d]
	I1009 12:42:32.260926    4056 ssh_runner.go:195] Run: docker stop 60b710e3ac8d 0fe6dcae56d3 ec3f65181026 5ae86cb0a43f 292e40c297f5 6ad25cea7b79 ef6bd7897f53 ae0a291f9f06 557a401ad1a9 301a37b51d64 6c7a674ad960 120043bae0b5 21acea369545 a29f202107da f9e43d160ee4 a2ea44b2098d
	I1009 12:42:32.276031    4056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 12:42:32.369076    4056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:42:32.373868    4056 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  9 19:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  9 19:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  9 19:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  9 19:41 /etc/kubernetes/scheduler.conf
	
	I1009 12:42:32.373920    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf
	I1009 12:42:32.377199    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.377237    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:42:32.380821    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf
	I1009 12:42:32.384658    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.384710    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:42:32.388265    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.391893    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.391937    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.395140    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf
	I1009 12:42:32.398271    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.398312    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 12:42:32.401470    4056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:42:32.404829    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:32.430239    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.236856    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.507086    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.531953    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
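
Because existing configuration files were found, the restart path replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config instead of re-initializing the cluster from scratch. A sketch of that sequence, assuming the same binary layout as the log shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phase order taken from the five commands logged above.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
    			return
    		}
    	}
    }
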
	I1009 12:42:33.557030    4056 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:42:33.557126    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.059513    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.559272    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.564181    4056 api_server.go:72] duration metric: took 1.007177833s to wait for apiserver process to appear ...
	I1009 12:42:34.564195    4056 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:42:34.564215    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:39.566477    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:39.566534    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:44.566770    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:44.566826    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:49.567145    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:49.567173    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:54.567605    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:54.567641    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:59.568274    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:59.568327    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:04.569367    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:04.569413    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:09.571027    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:09.571075    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:14.572307    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:14.572352    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:19.574593    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:19.574672    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:24.577113    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:24.577158    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:29.578165    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:29.578215    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:34.580397    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
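
Each "Checking … stopped" pair above is a single probe of the apiserver's /healthz endpoint that dies on a roughly 5-second per-request client timeout; the checker then retries until its overall deadline lapses, which is why the timestamps advance in 5-second steps. A sketch of that polling shape, with illustrative timeouts, skipping CA verification only for brevity (a real client would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-probe timeout, matching the log cadence
    		Transport: &http.Transport{
    			// Sketch only: production code verifies against the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // illustrative overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if err == nil {
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
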
	I1009 12:43:34.580976    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:34.594927    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:34.595029    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:34.607007    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:34.607101    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:34.617912    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:34.617998    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:34.633607    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:34.633690    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:34.644230    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:34.644310    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:34.657150    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:34.657234    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:34.667357    4056 logs.go:282] 0 containers: []
	W1009 12:43:34.667368    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:34.667427    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:34.685594    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:34.685612    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:34.685617    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:34.700268    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:34.700280    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:34.714601    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:34.714612    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:34.741317    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:34.741328    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:34.754959    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:34.754970    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:34.870393    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:34.870403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:34.882246    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:34.882256    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:34.896061    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:34.896074    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:34.913726    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:34.913738    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:34.921615    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:34.921625    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:34.937497    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:34.937508    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:34.949470    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:34.949480    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:34.993492    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:34.993504    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:35.004903    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:35.004913    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:35.022848    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:35.022859    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:35.039533    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:35.039543    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:35.054606    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:35.054618    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:35.067289    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:35.067303    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
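
When the apiserver stays unreachable, the checker falls back to evidence gathering: list containers whose names match k8s_<component>, then tail each one's logs, as the block above shows (and as the following blocks repeat on every retry). A compact sketch of that loop, assuming the docker CLI and an illustrative component list:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		// Same filter shape as the logged `docker ps` commands.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s\n", comp, id, logs)
    		}
    	}
    }
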
	I1009 12:43:37.580336    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:42.580693    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:42.580964    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:42.601403    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:42.601513    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:42.619702    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:42.619788    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:42.631540    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:42.631622    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:42.643493    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:42.643582    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:42.654290    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:42.654369    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:42.665186    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:42.665262    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:42.674854    4056 logs.go:282] 0 containers: []
	W1009 12:43:42.674867    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:42.674933    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:42.689585    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:42.689599    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:42.689604    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:42.694635    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:42.694642    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:42.720230    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:42.720237    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:42.744137    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:42.744147    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:42.755519    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:42.755530    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:42.770339    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:42.770353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:43:42.781726    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:42.781737    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:42.793555    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:42.793570    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:42.805819    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:42.805828    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:42.843184    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:42.843195    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:42.857872    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:42.857884    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:42.869340    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:42.869353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:42.883038    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:42.883048    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:42.895039    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:42.895052    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:42.912461    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:42.912471    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:42.952467    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:42.952474    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:42.967261    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:42.967272    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:42.984713    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:42.984728    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:45.498248    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:50.500490    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:50.500821    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:50.527168    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:50.527315    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:50.544597    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:50.544686    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:50.557820    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:50.557913    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:50.575772    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:50.575858    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:50.586754    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:50.586826    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:50.597374    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:50.597470    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:50.607887    4056 logs.go:282] 0 containers: []
	W1009 12:43:50.607897    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:50.607963    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:50.618747    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:50.618772    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:50.618778    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:50.633149    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:50.633158    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:50.645666    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:50.645678    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:50.657290    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:50.657303    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:50.674982    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:50.674993    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:50.713431    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:50.713447    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:50.735455    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:50.735467    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:43:50.746335    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:50.746346    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:50.764019    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:50.764033    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:50.779996    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:50.780009    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:50.791702    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:50.791712    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:50.795720    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:50.795729    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:50.821661    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:50.821671    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:50.832921    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:50.832940    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:50.844168    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:50.844180    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:50.858346    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:50.858359    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:50.870462    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:50.870473    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:50.886110    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:50.886121    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
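The pattern above repeats throughout this section: api_server.go probes https://10.0.2.15:8443/healthz, each probe dies after roughly five seconds with a client timeout, and minikube then falls back to collecting component logs before probing again. Below is a minimal sketch of that probe behavior, assuming only what the log shows (the URL and the ~5s timeout are taken from the lines above); it is an illustration, not minikube's actual api_server.go implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver health endpoint with a
// ~5s client timeout, matching the gap between each "Checking apiserver
// healthz" line and its "stopped: ... Client.Timeout exceeded" line above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver inside the VM serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. Client.Timeout exceeded while awaiting headers
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// retry a few times, pausing between probes as the log does; on each
	// failure the real tool gathers component logs before probing again
	for attempt := 0; attempt < 5; attempt++ {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}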
	I1009 12:43:53.431077    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:58.433730    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:58.434306    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:58.485629    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:58.485764    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:58.515523    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:58.515616    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:58.528918    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:58.529001    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:58.539292    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:58.539374    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:58.550121    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:58.550206    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:58.561028    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:58.561113    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:58.571619    4056 logs.go:282] 0 containers: []
	W1009 12:43:58.571629    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:58.571700    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:58.582812    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:58.582830    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:58.582837    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:58.594653    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:58.594666    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:58.606835    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:58.606847    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:58.632931    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:58.632946    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:58.644674    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:58.644686    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:58.649081    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:58.649087    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:43:58.660219    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:58.660229    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:58.677898    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:58.677909    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:58.692678    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:58.692688    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:58.734850    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:58.734857    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:58.748107    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:58.748117    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:58.763761    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:58.763771    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:58.781628    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:58.781636    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:58.793577    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:58.793593    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:58.837020    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:58.837032    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:58.850774    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:58.850784    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:58.867218    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:58.867235    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:58.880026    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:58.880042    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
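Each collection cycle begins the same way: enumerate container IDs per control-plane component with a docker ps name filter, then tail the last 400 lines of each matching container's logs. A minimal sketch of that enumeration step follows; the component names, the k8s_ name prefix, and the --tail 400 value are copied from the log, while the containerIDs helper itself is hypothetical, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter=name=... --format={{.ID}}` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c+":", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		for _, id := range ids {
			// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = out // the real tool folds this output into the report
		}
	}
}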
	I1009 12:44:01.398674    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:06.401479    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:06.402026    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:06.439615    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:06.439776    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:06.461533    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:06.461670    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:06.484338    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:06.484423    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:06.498574    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:06.498658    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:06.509381    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:06.509454    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:06.520139    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:06.520221    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:06.530627    4056 logs.go:282] 0 containers: []
	W1009 12:44:06.530644    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:06.530716    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:06.541410    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:06.541430    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:06.541435    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:06.553799    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:06.553813    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:06.571986    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:06.571997    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:06.615058    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:06.615069    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:06.619396    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:06.619403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:06.633640    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:06.633651    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:06.646049    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:06.646063    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:06.662695    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:06.662705    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:06.675479    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:06.675491    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:06.700443    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:06.700453    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:06.716011    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:06.716021    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:06.727742    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:06.727753    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:06.739833    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:06.739846    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:06.767096    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:06.767106    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:06.780675    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:06.780685    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:06.820246    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:06.820260    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:06.835715    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:06.835726    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:06.850804    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:06.850814    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:09.365332    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:14.367722    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:14.368281    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:14.407904    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:14.408066    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:14.430174    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:14.430309    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:14.445791    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:14.445879    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:14.458139    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:14.458227    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:14.471519    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:14.472536    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:14.483706    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:14.483789    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:14.493873    4056 logs.go:282] 0 containers: []
	W1009 12:44:14.493884    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:14.493946    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:14.511461    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:14.511476    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:14.511483    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:14.516126    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:14.516134    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:14.530527    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:14.530541    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:14.541996    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:14.542009    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:14.557072    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:14.557083    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:14.569235    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:14.569247    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:14.588271    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:14.588283    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:14.601425    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:14.601438    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:14.614220    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:14.614236    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:14.627506    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:14.627519    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:14.640186    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:14.640198    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:14.680800    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:14.680816    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:14.701170    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:14.701186    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:14.720251    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:14.720259    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:14.747802    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:14.747813    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:14.792980    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:14.792991    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:14.808559    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:14.808576    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:14.820953    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:14.820966    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:17.346845    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:22.349525    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:22.350172    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:22.391755    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:22.391922    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:22.413727    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:22.413845    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:22.429191    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:22.429282    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:22.441653    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:22.441744    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:22.452474    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:22.452546    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:22.464087    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:22.464170    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:22.475689    4056 logs.go:282] 0 containers: []
	W1009 12:44:22.475703    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:22.475774    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:22.486744    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:22.486758    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:22.486763    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:22.528450    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:22.528464    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:22.570034    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:22.570045    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:22.585794    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:22.585810    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:22.599173    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:22.599185    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:22.613842    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:22.613857    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:22.629475    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:22.629487    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:22.634684    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:22.634696    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:22.646767    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:22.646779    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:22.663098    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:22.663109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:22.676726    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:22.676737    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:22.689302    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:22.689316    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:22.709814    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:22.709829    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:22.737273    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:22.737285    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:22.767244    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:22.767260    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:22.790049    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:22.790064    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:22.813193    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:22.813201    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:22.827878    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:22.827890    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
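The "container status" one-liner just above relies on a shell fallback: `which crictl || echo crictl` resolves crictl's full path when it is installed, otherwise substitutes the bare name so the command fails, and the trailing `|| sudo docker ps -a` then falls back to docker. The same logic, spelled out as a sketch (containerStatus is a hypothetical helper, not part of minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback from the one-liner above:
// prefer crictl when it is on PATH, otherwise use docker.
func containerStatus() ([]byte, error) {
	// `which crictl || echo crictl`: resolve crictl if installed; otherwise
	// the bare name is left to fail, triggering the `||` fallback below
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a").Output()
	}
	// `|| sudo docker ps -a`: fall back to docker when crictl is unusable
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}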
	I1009 12:44:25.343272    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:30.344292    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:30.344873    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:30.385018    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:30.385186    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:30.406185    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:30.406296    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:30.422341    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:30.422429    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:30.435623    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:30.435701    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:30.448218    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:30.448302    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:30.459980    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:30.460086    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:30.473364    4056 logs.go:282] 0 containers: []
	W1009 12:44:30.473375    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:30.473445    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:30.485296    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:30.485314    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:30.485319    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:30.498397    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:30.498408    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:30.513206    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:30.513217    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:30.528474    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:30.528489    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:30.541960    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:30.541971    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:30.554380    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:30.554391    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:30.566758    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:30.566770    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:30.579012    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:30.579024    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:30.591659    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:30.591672    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:30.603774    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:30.603785    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:30.625974    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:30.625987    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:30.644710    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:30.644724    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:30.672151    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:30.672163    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:30.677301    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:30.677309    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:30.721133    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:30.721148    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:30.734477    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:30.734489    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:30.750691    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:30.750705    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:30.795794    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:30.795815    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:33.322816    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:38.325344    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:38.325528    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:38.340535    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:38.340581    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:38.356589    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:38.356681    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:38.369347    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:38.369387    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:38.380692    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:38.380774    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:38.392440    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:38.392484    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:38.408100    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:38.408184    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:38.418843    4056 logs.go:282] 0 containers: []
	W1009 12:44:38.418857    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:38.418931    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:38.429925    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:38.429940    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:38.429949    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:38.446321    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:38.446330    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:38.464755    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:38.464766    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:38.476954    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:38.476967    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:38.522357    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:38.522366    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:38.527455    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:38.527462    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:38.546502    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:38.546513    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:38.558220    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:38.558230    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:38.573358    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:38.573374    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:38.585915    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:38.585927    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:38.604118    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:38.604131    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:38.619646    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:38.619655    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:38.632881    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:38.632890    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:38.645822    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:38.645834    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:38.672662    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:38.672686    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:38.710921    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:38.710934    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:38.725990    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:38.726005    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:38.739052    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:38.739064    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:41.259188    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:46.261557    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:46.261856    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:46.287063    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:46.287175    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:46.306528    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:46.306617    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:46.319878    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:46.319957    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:46.331796    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:46.331872    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:46.343595    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:46.343668    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:46.355211    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:46.355290    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:46.366577    4056 logs.go:282] 0 containers: []
	W1009 12:44:46.366586    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:46.366625    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:46.377714    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:46.377729    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:46.377736    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:46.416501    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:46.416515    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:46.428150    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:46.428162    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:46.442175    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:46.442192    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:46.457311    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:46.457327    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:46.471128    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:46.471143    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:46.488519    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:46.488531    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:46.500579    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:46.500591    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:46.512845    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:46.512857    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:46.540004    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:46.540019    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:46.585331    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:46.585342    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:46.598371    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:46.598381    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:46.603108    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:46.603118    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:46.617866    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:46.617877    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:46.634723    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:46.634736    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:46.650401    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:46.650409    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:46.669222    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:46.669235    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:46.687864    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:46.687876    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:49.203512    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:54.205619    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:54.205722    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:54.218746    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:54.218838    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:54.230548    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:54.230630    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:54.242163    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:54.242247    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:54.253725    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:54.253810    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:54.265239    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:54.265317    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:54.277319    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:54.277407    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:54.288393    4056 logs.go:282] 0 containers: []
	W1009 12:44:54.288406    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:54.288475    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:54.300362    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:54.300379    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:54.300385    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:54.305274    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:54.305285    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:54.317943    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:54.317955    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:54.334018    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:54.334033    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:54.345679    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:54.345694    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:54.372119    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:54.372130    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:54.384572    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:54.384584    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:54.396865    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:54.396877    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:54.414782    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:54.414794    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:54.460806    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:54.460817    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:54.500342    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:54.500356    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:54.514895    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:54.514906    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:54.532595    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:54.532603    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:54.546644    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:54.546652    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:54.562833    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:54.562843    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:54.574723    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:54.574737    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:54.589342    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:54.589354    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:54.601953    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:54.601965    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:57.122393    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:02.124408    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:02.124569    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:02.137132    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:02.137237    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:02.150795    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:02.150836    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:02.162220    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:02.162269    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:02.174182    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:02.174224    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:02.185590    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:02.185669    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:02.197835    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:02.197910    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:02.209210    4056 logs.go:282] 0 containers: []
	W1009 12:45:02.209223    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:02.209286    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:02.220722    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:02.220737    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:02.220743    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:02.240255    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:02.240263    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:02.252651    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:02.252665    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:02.277636    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:02.277654    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:02.290781    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:02.290793    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:02.296108    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:02.296117    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:02.307780    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:02.307788    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:02.323017    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:02.323028    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:02.336353    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:02.336365    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:02.352729    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:02.352741    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:02.370914    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:02.370926    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:02.383386    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:02.383397    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:02.401353    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:02.401364    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:02.451057    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:02.451071    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:02.493275    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:02.493288    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:02.506110    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:02.506121    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:02.520292    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:02.520303    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:02.532724    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:02.532734    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:05.046470    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:10.048892    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:10.049142    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:10.071132    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:10.071245    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:10.088655    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:10.088752    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:10.102960    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:10.103042    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:10.114666    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:10.114750    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:10.127164    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:10.127245    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:10.143018    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:10.143118    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:10.155428    4056 logs.go:282] 0 containers: []
	W1009 12:45:10.155442    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:10.155517    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:10.166714    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:10.166729    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:10.166735    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:10.171730    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:10.171741    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:10.188872    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:10.188881    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:10.208564    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:10.208574    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:10.221047    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:10.221059    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:10.239946    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:10.239964    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:10.253108    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:10.253119    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:10.273926    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:10.273936    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:10.319568    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:10.319589    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:10.356728    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:10.356740    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:10.371497    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:10.371505    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:10.386992    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:10.387000    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:10.398744    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:10.398756    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:10.412045    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:10.412057    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:10.430220    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:10.430231    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:10.449980    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:10.449998    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:10.468903    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:10.468917    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:10.486627    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:10.486638    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
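
The block above is one complete diagnostic sweep: each time the healthz probe fails, minikube lists the containers for every control-plane component (docker ps -a --filter=name=k8s_<component> --format={{.ID}}) and then tails the last 400 lines of each container's logs, warning when a component such as kindnet has no matching container. The same sweep repeats, in varying order, for every failed probe below. As a rough illustration of that sweep — a minimal, hypothetical Go sketch under the assumption of a local Docker daemon, not minikube's actual logs.go code — the loop looks like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The components whose containers are enumerated in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			// docker logs --tail 400 <id>, as in the repeated log lines above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = logs // a real gatherer would persist these into the test report
		}
	}
}

Run against a live Docker daemon, this prints one "Gathering logs for ..." line per container, mirroring the log output above and below.
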
	I1009 12:45:13.012618    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:18.015088    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:18.015407    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:18.042172    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:18.042282    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:18.060575    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:18.060663    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:18.074367    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:18.074477    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:18.087271    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:18.087358    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:18.099017    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:18.099102    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:18.111189    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:18.111277    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:18.122281    4056 logs.go:282] 0 containers: []
	W1009 12:45:18.122315    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:18.122398    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:18.135171    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:18.135188    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:18.135195    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:18.141540    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:18.141551    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:18.161971    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:18.161983    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:18.186843    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:18.186853    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:18.233709    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:18.233723    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:18.272977    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:18.272993    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:18.285926    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:18.285939    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:18.301929    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:18.301943    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:18.321549    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:18.321562    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:18.336850    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:18.336860    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:18.354733    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:18.354746    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:18.373220    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:18.373231    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:18.385781    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:18.385792    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:18.397786    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:18.397798    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:18.413203    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:18.413220    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:18.429385    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:18.429396    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:18.441739    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:18.441750    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:18.454317    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:18.454331    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:20.969557    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:25.971871    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:25.972121    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:26.002832    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:26.002954    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:26.022814    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:26.022925    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:26.037506    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:26.037583    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:26.049704    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:26.049785    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:26.061227    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:26.061309    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:26.077110    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:26.077202    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:26.088191    4056 logs.go:282] 0 containers: []
	W1009 12:45:26.088203    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:26.088269    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:26.101576    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:26.101626    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:26.101636    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:26.114182    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:26.114193    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:26.126876    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:26.126889    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:26.143294    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:26.143308    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:26.156130    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:26.156142    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:26.181983    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:26.181993    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:26.195418    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:26.195429    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:26.208342    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:26.208353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:26.223142    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:26.223152    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:26.268120    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:26.268132    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:26.306471    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:26.306483    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:26.322225    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:26.322236    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:26.335440    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:26.335452    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:26.353723    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:26.353733    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:26.373065    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:26.373079    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:26.389004    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:26.389014    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:26.401644    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:26.401652    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:26.406849    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:26.406860    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:28.924189    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:33.926439    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:33.926546    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:33.941141    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:33.941227    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:33.955926    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:33.955996    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:33.969568    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:33.969651    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:33.981131    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:33.981221    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:33.997239    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:33.997301    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:34.008255    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:34.008336    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:34.020687    4056 logs.go:282] 0 containers: []
	W1009 12:45:34.020695    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:34.020764    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:34.031858    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:34.031874    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:34.031881    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:34.036291    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:34.036299    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:34.048634    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:34.048646    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:34.068429    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:34.068444    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:34.085246    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:34.085257    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:34.104223    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:34.104236    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:34.116580    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:34.116594    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:34.130375    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:34.130389    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:34.169685    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:34.169697    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:34.184416    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:34.184425    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:34.196745    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:34.196757    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:34.225953    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:34.225963    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:34.245632    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:34.245641    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:34.257859    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:34.257870    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:34.282817    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:34.282828    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:34.326252    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:34.326267    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:34.341242    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:34.341255    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:34.353035    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:34.353050    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:36.872662    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:41.875050    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:41.875156    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:41.887279    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:41.887369    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:41.898672    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:41.898765    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:41.912037    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:41.912117    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:41.922945    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:41.922987    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:41.934426    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:41.934461    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:41.945379    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:41.945462    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:41.956984    4056 logs.go:282] 0 containers: []
	W1009 12:45:41.956997    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:41.957071    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:41.968458    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:41.968476    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:41.968482    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:41.984083    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:41.984094    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:41.996697    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:41.996708    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:42.009269    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:42.009280    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:42.056231    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:42.056243    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:42.096504    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:42.096517    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:42.108423    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:42.108438    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:42.128611    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:42.128624    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:42.141325    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:42.141338    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:42.156816    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:42.156827    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:42.172251    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:42.172267    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:42.196536    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:42.196547    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:42.209667    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:42.209678    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:42.214741    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:42.214751    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:42.231587    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:42.231599    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:42.243805    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:42.243816    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:42.262082    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:42.262093    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:42.273936    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:42.273947    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:44.787640    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:49.788490    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:49.788591    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:49.800583    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:49.800628    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:49.812491    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:49.812536    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:49.824213    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:49.824265    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:49.835218    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:49.835272    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:49.846823    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:49.846900    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:49.860232    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:49.860315    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:49.871951    4056 logs.go:282] 0 containers: []
	W1009 12:45:49.871964    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:49.872034    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:49.883025    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:49.883040    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:49.883045    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:49.904843    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:49.904855    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:49.917087    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:49.917098    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:49.929405    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:49.929414    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:49.947787    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:49.947800    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:49.952848    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:49.952860    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:49.990855    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:49.990866    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:50.005394    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:50.005403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:50.020876    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:50.020887    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:50.064451    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:50.064472    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:50.082994    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:50.083011    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:50.108306    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:50.108323    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:50.122264    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:50.122277    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:50.137400    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:50.137412    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:50.149902    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:50.149918    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:50.161471    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:50.161482    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:50.176298    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:50.176307    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:50.191410    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:50.191421    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:52.704063    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:57.706179    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:57.706267    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:57.718090    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:57.718175    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:57.729722    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:57.729806    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:57.740812    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:57.740890    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:57.751667    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:57.751746    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:57.762963    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:57.763043    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:57.774978    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:57.775059    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:57.786677    4056 logs.go:282] 0 containers: []
	W1009 12:45:57.786690    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:57.786760    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:57.797696    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:57.797709    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:57.797714    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:57.835932    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:57.835944    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:57.855731    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:57.855739    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:57.860253    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:57.860264    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:57.871968    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:57.871976    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:57.883891    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:57.883903    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:57.902370    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:57.902385    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:57.915420    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:57.915431    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:57.928660    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:57.928669    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:57.943414    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:57.943424    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:57.958991    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:57.959002    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:57.971097    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:57.971109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:57.987064    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:57.987075    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:57.999621    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:57.999633    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:58.022535    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:58.022543    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:58.064268    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:58.064280    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:58.078628    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:58.078638    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:58.091081    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:58.091091    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:00.607388    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:05.609724    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:05.609817    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:05.624255    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:05.624308    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:05.636254    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:05.636298    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:05.652119    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:05.652176    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:05.668391    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:05.668477    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:05.680133    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:05.680218    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:05.692199    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:05.692283    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:05.707808    4056 logs.go:282] 0 containers: []
	W1009 12:46:05.707819    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:05.707888    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:05.719499    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:05.719516    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:05.719521    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:05.763887    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:05.763899    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:05.778145    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:05.778155    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:05.790812    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:05.790825    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:05.811263    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:05.811275    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:05.823989    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:05.824006    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:05.861636    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:05.861649    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:05.874391    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:05.874408    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:05.890302    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:05.890315    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:05.903135    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:05.903147    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:05.915667    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:05.915680    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:05.920536    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:05.920546    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:05.935725    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:05.935739    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:05.951030    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:05.951041    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:05.966817    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:05.966830    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:05.978915    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:05.978925    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:05.991670    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:05.991680    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:06.009721    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:06.009732    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:08.535155    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:13.537528    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:13.537705    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:13.555951    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:13.556046    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:13.569870    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:13.569953    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:13.582607    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:13.582684    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:13.594460    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:13.594529    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:13.605918    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:13.605989    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:13.619900    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:13.619986    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:13.631166    4056 logs.go:282] 0 containers: []
	W1009 12:46:13.631176    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:13.631258    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:13.644305    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:13.644320    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:13.644326    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:13.690743    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:13.690762    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:13.730906    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:13.730923    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:13.746068    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:13.746079    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:13.758225    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:13.758238    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:13.773243    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:13.773260    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:13.786023    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:13.786034    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:13.804364    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:13.804375    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:13.824634    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:13.824649    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:13.829650    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:13.829657    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:13.842486    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:13.842497    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:13.866399    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:13.866416    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:13.880668    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:13.880680    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:13.893250    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:13.893261    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:13.909201    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:13.909213    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:13.924698    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:13.924708    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:13.942922    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:13.942935    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:13.954662    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:13.954672    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:16.472053    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:21.474440    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:21.474757    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:21.499463    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:21.499574    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:21.516421    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:21.516511    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:21.529745    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:21.529824    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:21.541822    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:21.541907    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:21.553405    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:21.553486    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:21.566552    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:21.566637    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:21.577714    4056 logs.go:282] 0 containers: []
	W1009 12:46:21.577725    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:21.577801    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:21.592297    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:21.592311    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:21.592317    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:21.616382    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:21.616394    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:21.628799    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:21.628812    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:21.649007    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:21.649020    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:21.669008    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:21.669018    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:21.681866    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:21.681876    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:21.686980    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:21.686992    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:21.699678    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:21.699692    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:21.713125    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:21.713136    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:21.739123    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:21.739143    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:21.772221    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:21.772240    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:21.824403    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:21.824415    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:21.836481    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:21.836494    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:21.851822    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:21.851834    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:21.874796    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:21.874807    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:21.888846    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:21.888859    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:21.905721    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:21.905738    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:21.917451    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:21.917463    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:24.460670    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:29.460909    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:29.460984    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:29.476397    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:29.476491    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:29.488591    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:29.488676    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:29.500625    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:29.500705    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:29.512688    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:29.512777    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:29.524078    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:29.524158    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:29.535393    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:29.535470    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:29.546646    4056 logs.go:282] 0 containers: []
	W1009 12:46:29.546667    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:29.546738    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:29.557853    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:29.557870    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:29.557876    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:29.581237    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:29.581247    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:29.625444    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:29.625465    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:29.630591    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:29.630604    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:29.645463    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:29.645474    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:29.660280    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:29.660293    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:29.672546    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:29.672559    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:29.685196    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:29.685208    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:29.698450    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:29.698464    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:29.710509    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:29.710520    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:29.728462    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:29.728479    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:29.746715    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:29.746732    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:29.759595    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:29.759607    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:29.797174    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:29.797182    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:29.815742    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:29.815755    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:29.834645    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:29.834657    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:29.855263    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:29.855277    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:29.866525    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:29.866537    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
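The block above is one full diagnostics pass: for each control-plane component, minikube lists matching containers via Docker's k8s_<component> naming convention, then tails each container's logs. A rough sketch of that gather loop, assuming a local docker CLI — illustrative only, not minikube's ssh_runner implementation:

```go
// Sketch of the gather cycle: "docker ps -a --filter=name=k8s_<component>"
// to find container IDs, then "docker logs --tail 400 <id>" per container,
// mirroring the commands in the log lines above. Requires a local docker
// binary on PATH (an assumption for this sketch).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy",
		"kube-controller-manager", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "list failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
```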
	I1009 12:46:32.380041    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:37.382395    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:37.382487    4056 kubeadm.go:597] duration metric: took 4m5.154483542s to restartPrimaryControlPlane
	W1009 12:46:37.382547    4056 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 12:46:37.382577    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1009 12:46:38.511080    4056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.128522959s)
	I1009 12:46:38.511130    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 12:46:38.516455    4056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:46:38.519886    4056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:46:38.522828    4056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 12:46:38.522834    4056 kubeadm.go:157] found existing configuration files:
	
	I1009 12:46:38.522869    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf
	I1009 12:46:38.525700    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 12:46:38.525733    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:46:38.529153    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf
	I1009 12:46:38.532315    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 12:46:38.532354    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:46:38.535159    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.537927    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 12:46:38.537954    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.541244    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf
	I1009 12:46:38.544382    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 12:46:38.544431    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 12:46:38.547353    4056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 12:46:38.564407    4056 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1009 12:46:38.564591    4056 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 12:46:38.617587    4056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 12:46:38.617641    4056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 12:46:38.617698    4056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 12:46:38.673987    4056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 12:46:38.683088    4056 out.go:235]   - Generating certificates and keys ...
	I1009 12:46:38.683169    4056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 12:46:38.683265    4056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 12:46:38.683383    4056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 12:46:38.683471    4056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 12:46:38.683565    4056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 12:46:38.683602    4056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 12:46:38.683642    4056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 12:46:38.683678    4056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 12:46:38.683777    4056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 12:46:38.683822    4056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 12:46:38.683863    4056 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 12:46:38.683898    4056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 12:46:38.842235    4056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 12:46:39.158174    4056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 12:46:39.269993    4056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 12:46:39.353931    4056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 12:46:39.385941    4056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 12:46:39.386329    4056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 12:46:39.386457    4056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 12:46:39.478883    4056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 12:46:39.482719    4056 out.go:235]   - Booting up control plane ...
	I1009 12:46:39.482834    4056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 12:46:39.482931    4056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 12:46:39.487369    4056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 12:46:39.487630    4056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 12:46:39.488506    4056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 12:46:43.991806    4056 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503329 seconds
	I1009 12:46:43.991910    4056 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 12:46:43.995870    4056 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 12:46:44.505032    4056 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 12:46:44.505134    4056 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-763000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 12:46:45.009117    4056 kubeadm.go:310] [bootstrap-token] Using token: o94btb.71bdwp2j2jh2bto7
	I1009 12:46:45.014477    4056 out.go:235]   - Configuring RBAC rules ...
	I1009 12:46:45.014553    4056 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 12:46:45.014605    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 12:46:45.016879    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 12:46:45.017952    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 12:46:45.018823    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 12:46:45.019799    4056 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 12:46:45.022755    4056 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 12:46:45.226027    4056 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 12:46:45.413317    4056 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 12:46:45.413691    4056 kubeadm.go:310] 
	I1009 12:46:45.413727    4056 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 12:46:45.413734    4056 kubeadm.go:310] 
	I1009 12:46:45.413779    4056 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 12:46:45.413812    4056 kubeadm.go:310] 
	I1009 12:46:45.413832    4056 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 12:46:45.413858    4056 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 12:46:45.413883    4056 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 12:46:45.413886    4056 kubeadm.go:310] 
	I1009 12:46:45.413911    4056 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 12:46:45.413960    4056 kubeadm.go:310] 
	I1009 12:46:45.414064    4056 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 12:46:45.414072    4056 kubeadm.go:310] 
	I1009 12:46:45.414098    4056 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 12:46:45.414168    4056 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 12:46:45.414202    4056 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 12:46:45.414204    4056 kubeadm.go:310] 
	I1009 12:46:45.414246    4056 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 12:46:45.414292    4056 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 12:46:45.414295    4056 kubeadm.go:310] 
	I1009 12:46:45.414335    4056 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o94btb.71bdwp2j2jh2bto7 \
	I1009 12:46:45.414395    4056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e \
	I1009 12:46:45.414406    4056 kubeadm.go:310] 	--control-plane 
	I1009 12:46:45.414409    4056 kubeadm.go:310] 
	I1009 12:46:45.414446    4056 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 12:46:45.414450    4056 kubeadm.go:310] 
	I1009 12:46:45.414497    4056 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o94btb.71bdwp2j2jh2bto7 \
	I1009 12:46:45.414557    4056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e 
	I1009 12:46:45.414733    4056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
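The sha256:... value in the join commands above is kubeadm's discovery token CA certificate hash: a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained sketch of that derivation (the ca.crt path is the conventional control-plane location, assumed here):

```go
// Computes the kubeadm --discovery-token-ca-cert-hash value:
// sha256 over the CA certificate's RawSubjectPublicKeyInfo.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional path on the control plane
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash printed by kubeadm init
}
```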
	I1009 12:46:45.414743    4056 cni.go:84] Creating CNI manager for ""
	I1009 12:46:45.414752    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:46:45.419317    4056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 12:46:45.426541    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 12:46:45.430048    4056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 12:46:45.435682    4056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 12:46:45.435778    4056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 12:46:45.435817    4056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-763000 minikube.k8s.io/updated_at=2024_10_09T12_46_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=running-upgrade-763000 minikube.k8s.io/primary=true
	I1009 12:46:45.483664    4056 kubeadm.go:1113] duration metric: took 47.963333ms to wait for elevateKubeSystemPrivileges
	I1009 12:46:45.483674    4056 ops.go:34] apiserver oom_adj: -16
	I1009 12:46:45.483683    4056 kubeadm.go:394] duration metric: took 4m13.271076583s to StartCluster
	I1009 12:46:45.483695    4056 settings.go:142] acquiring lock: {Name:mk60ce4ac2055fafaa579c122d2ddfc9feae1fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.483797    4056 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:46:45.484214    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.484705    4056 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:46:45.484870    4056 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:46:45.485210    4056 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 12:46:45.485356    4056 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-763000"
	I1009 12:46:45.485364    4056 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-763000"
	W1009 12:46:45.485368    4056 addons.go:243] addon storage-provisioner should already be in state true
	I1009 12:46:45.485380    4056 host.go:66] Checking if "running-upgrade-763000" exists ...
	I1009 12:46:45.485363    4056 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-763000"
	I1009 12:46:45.485445    4056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-763000"
	I1009 12:46:45.486543    4056 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c0f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
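The rest.Config dump above is the client-go configuration minikube assembles from the profile's client certificate and key. A hedged sketch of building and using such a config with client-go (hypothetical file paths; against this cluster the List call would time out, consistent with the healthz failures below):

```go
// Sketch of a client-go rest.Config with certificate-file authentication,
// shaped like the dumped config above. Paths are placeholders, not the
// actual Jenkins workspace paths from the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/<profile>/client.crt", // placeholder
			KeyFile:  "/path/to/profiles/<profile>/client.key", // placeholder
			CAFile:   "/path/to/.minikube/ca.crt",              // placeholder
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// Expected in this run: the apiserver never becomes reachable.
		fmt.Println("list nodes failed:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```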
	I1009 12:46:45.486687    4056 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-763000"
	W1009 12:46:45.486692    4056 addons.go:243] addon default-storageclass should already be in state true
	I1009 12:46:45.486704    4056 host.go:66] Checking if "running-upgrade-763000" exists ...
	I1009 12:46:45.488291    4056 out.go:177] * Verifying Kubernetes components...
	I1009 12:46:45.488808    4056 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:45.492349    4056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 12:46:45.492364    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:46:45.496231    4056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:46:45.500386    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:46:45.504285    4056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:45.504294    4056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 12:46:45.504303    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:46:45.596806    4056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:46:45.603232    4056 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:46:45.603311    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:46:45.607803    4056 api_server.go:72] duration metric: took 122.92075ms to wait for apiserver process to appear ...
	I1009 12:46:45.607814    4056 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:46:45.607824    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:45.634308    4056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:45.649397    4056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:45.970308    4056 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 12:46:45.970320    4056 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 12:46:50.608705    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:50.608727    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:55.609616    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:55.609652    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:00.609788    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:00.609810    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:05.610026    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:05.610093    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:10.610422    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:10.610461    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:15.611401    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:15.611441    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1009 12:47:15.972656    4056 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1009 12:47:15.976836    4056 out.go:177] * Enabled addons: storage-provisioner
	I1009 12:47:15.988525    4056 addons.go:510] duration metric: took 30.504522542s for enable addons: enabled=[storage-provisioner]
	I1009 12:47:20.612256    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:20.612331    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:25.613583    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:25.613609    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:30.614933    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:30.614978    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:35.616807    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:35.616844    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:40.616935    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:40.616965    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:45.619055    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:45.619186    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:45.640930    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:47:45.641021    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:45.659849    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:47:45.659942    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:45.671804    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:47:45.671894    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:45.682164    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:47:45.682240    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:45.693085    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:47:45.693169    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:45.710470    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:47:45.710549    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:45.721082    4056 logs.go:282] 0 containers: []
	W1009 12:47:45.721096    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:45.721153    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:45.736547    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:47:45.736566    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:47:45.736571    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:47:45.751082    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:47:45.751091    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:47:45.762104    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:47:45.762115    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:47:45.773533    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:47:45.773544    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:47:45.791956    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:47:45.791970    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:47:45.809040    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:47:45.809057    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:47:45.821582    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:45.821594    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:45.847050    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:45.847064    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:45.883358    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:45.883369    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:47:45.888462    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:45.888478    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:45.927096    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:47:45.927109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:47:45.942753    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:47:45.942765    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:47:45.955915    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:47:45.955928    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:48.470412    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:53.473073    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:53.473525    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:53.503756    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:47:53.503902    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:53.522085    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:47:53.522198    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:53.536167    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:47:53.536265    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:53.547875    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:47:53.547954    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:53.562321    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:47:53.562402    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:53.572569    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:47:53.572636    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:53.583887    4056 logs.go:282] 0 containers: []
	W1009 12:47:53.583900    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:53.583957    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:53.594598    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:47:53.594614    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:47:53.594619    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:47:53.608774    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:47:53.608785    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:47:53.623120    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:47:53.623131    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:47:53.636098    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:47:53.636109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:47:53.648432    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:47:53.648443    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:47:53.662073    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:47:53.662082    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:47:53.675287    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:53.675298    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:53.713570    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:53.713584    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:47:53.718674    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:47:53.718690    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:47:53.734519    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:47:53.734529    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:47:53.759326    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:53.759335    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:53.786352    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:47:53.786363    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:53.799075    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:53.799089    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:56.337879    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:01.340086    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:01.340373    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:01.363198    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:01.363308    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:01.378653    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:01.378746    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:01.391261    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:01.391337    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:01.402178    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:01.402263    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:01.412594    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:01.412668    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:01.422848    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:01.422932    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:01.433023    4056 logs.go:282] 0 containers: []
	W1009 12:48:01.433041    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:01.433114    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:01.443037    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:01.443053    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:01.443058    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:01.478125    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:01.478134    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:01.515841    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:01.515858    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:01.528060    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:01.528073    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:01.541193    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:01.541204    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:01.565101    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:01.565114    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:01.590978    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:01.590992    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:01.603585    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:01.603596    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:01.608968    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:01.608978    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:01.623857    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:01.623872    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:01.639085    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:01.639095    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:01.653964    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:01.653977    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:01.677893    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:01.677904    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:04.191947    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:09.194156    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:09.194433    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:09.215705    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:09.215809    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:09.230986    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:09.231077    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:09.244097    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:09.244179    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:09.254853    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:09.254936    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:09.265512    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:09.265597    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:09.276188    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:09.276269    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:09.286544    4056 logs.go:282] 0 containers: []
	W1009 12:48:09.286556    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:09.286625    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:09.298016    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:09.298030    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:09.298036    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:09.312125    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:09.312137    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:09.325332    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:09.325345    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:09.350557    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:09.350569    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:09.363344    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:09.363357    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:09.401625    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:09.401636    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:09.447984    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:09.447998    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:09.475343    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:09.475361    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:09.494389    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:09.494400    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:09.521955    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:09.521981    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:09.527416    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:09.527434    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:09.542146    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:09.542158    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:09.561882    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:09.561899    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:12.077875    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:17.080017    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:17.080172    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:17.096437    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:17.096528    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:17.109064    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:17.109146    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:17.128605    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:17.128690    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:17.139317    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:17.139392    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:17.154406    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:17.154486    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:17.165403    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:17.165479    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:17.181732    4056 logs.go:282] 0 containers: []
	W1009 12:48:17.181746    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:17.181816    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:17.193368    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:17.193384    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:17.193389    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:17.206366    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:17.206378    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:17.218779    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:17.218791    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:17.238431    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:17.238447    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:17.251458    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:17.251470    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:17.270702    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:17.270715    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:17.307428    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:17.307452    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:17.387448    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:17.387462    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:17.400637    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:17.400649    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:17.416874    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:17.416888    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:17.443790    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:17.443801    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:17.449348    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:17.449356    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:17.465351    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:17.465359    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:19.982008    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:24.984208    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:24.984427    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:24.998974    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:24.999064    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:25.018421    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:25.018491    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:25.029825    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:25.029899    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:25.041068    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:25.041156    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:25.053093    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:25.053168    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:25.064692    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:25.064767    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:25.082789    4056 logs.go:282] 0 containers: []
	W1009 12:48:25.082799    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:25.082832    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:25.094292    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:25.094306    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:25.094312    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:25.111006    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:25.111015    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:25.115834    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:25.115844    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:25.153805    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:25.153816    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:25.169015    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:25.169027    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:25.184645    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:25.184655    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:25.199473    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:25.199486    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:25.215024    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:25.215038    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:25.244266    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:25.244282    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:25.280469    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:25.280487    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:25.293199    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:25.293210    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:25.308988    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:25.309000    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:25.321477    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:25.321489    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:27.849442    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:32.851755    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:32.851996    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:32.875468    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:32.875541    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:32.892303    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:32.892347    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:32.906452    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:32.906492    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:32.918344    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:32.918387    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:32.929840    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:32.929882    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:32.945648    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:32.945724    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:32.956923    4056 logs.go:282] 0 containers: []
	W1009 12:48:32.956935    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:32.957009    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:32.968657    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:32.968670    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:32.968675    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:32.981281    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:32.981293    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:32.999258    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:32.999275    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:33.014121    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:33.014136    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:33.046705    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:33.046732    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:33.061167    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:33.061178    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:33.066444    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:33.066454    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:33.106698    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:33.106709    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:33.122083    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:33.122095    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:33.137232    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:33.137243    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:33.156150    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:33.156161    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:33.193903    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:33.193913    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:33.208881    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:33.208894    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
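	Each healthz probe above gives up after roughly five seconds (12:48:27.849 to 12:48:32.851), and the error text "Client.Timeout exceeded while awaiting headers" points at a client-side request timeout rather than a refused connection. A hedged manual equivalent, assuming the guest apiserver certificate is not trusted by the caller:

	    # Same ~5s budget as the probe in the log; -k skips cert verification.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz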
	I1009 12:48:35.723924    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:40.726013    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:40.726273    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:40.743185    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:40.743275    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:40.762844    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:40.762926    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:40.773898    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:40.773979    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:40.787481    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:40.787563    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:40.799424    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:40.799504    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:40.810960    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:40.811039    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:40.822184    4056 logs.go:282] 0 containers: []
	W1009 12:48:40.822195    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:40.822260    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:40.833661    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:40.833676    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:40.833682    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:40.871330    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:40.871341    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:40.884282    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:40.884296    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:40.900492    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:40.900509    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:40.913253    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:40.913265    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:40.931329    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:40.931346    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:40.944132    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:40.944144    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:40.949309    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:40.949321    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:40.988538    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:40.988551    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:41.003175    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:41.003189    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:41.017790    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:41.017801    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:41.030628    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:41.030640    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:41.043294    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:41.043305    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
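	The recurring warning that no container matches "kindnet" is benign here: the sweep probes for a kindnet container unconditionally, and this profile presumably runs without the kindnet CNI, so the filter matches nothing:

	    # Prints no IDs on this profile, matching the W... logs.go:284 lines.
	    docker ps -a --filter=name=k8s_kindnet --format '{{.ID}}'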
	I1009 12:48:43.569258    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:48.571428    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:48.571723    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:48.592857    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:48.592950    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:48.605835    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:48.605920    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:48.617937    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:48.618012    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:48.633591    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:48.633672    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:48.645223    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:48.645302    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:48.657922    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:48.658000    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:48.669675    4056 logs.go:282] 0 containers: []
	W1009 12:48:48.669688    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:48.669749    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:48.681124    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:48.681137    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:48.681142    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:48.697393    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:48.697410    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:48.709621    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:48.709635    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:48.734288    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:48.734301    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:48.749572    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:48.749586    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:48.762480    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:48.762492    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:48.801582    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:48.801595    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:48.816518    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:48.816526    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:48.829240    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:48.829251    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:48.850857    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:48.850874    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:48.864783    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:48.864796    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:48.877373    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:48.877387    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:48.912296    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:48.912308    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:51.419371    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:56.420923    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:56.421431    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:56.458788    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:56.458955    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:56.483898    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:56.484071    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:56.512857    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:56.512939    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:56.534795    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:56.534879    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:56.552516    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:56.552606    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:56.572324    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:56.572406    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:56.608584    4056 logs.go:282] 0 containers: []
	W1009 12:48:56.608598    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:56.608673    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:56.627507    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:56.627568    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:56.627616    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:56.720457    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:56.720470    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:56.747857    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:56.747875    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:56.762722    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:56.762734    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:56.779927    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:56.779938    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:56.798301    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:56.798313    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:56.836181    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:56.836202    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:56.841709    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:56.841723    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:56.856890    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:56.856902    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:56.872410    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:56.872424    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:56.885181    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:56.885194    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:56.904143    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:56.904157    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:56.917988    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:56.918003    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:59.432882    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:04.435072    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:04.435439    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:04.464255    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:04.464380    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:04.483503    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:04.483606    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:04.498208    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:04.498301    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:04.510414    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:04.510498    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:04.521206    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:04.521256    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:04.532680    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:04.532763    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:04.544000    4056 logs.go:282] 0 containers: []
	W1009 12:49:04.544014    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:04.544091    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:04.555810    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:04.555828    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:04.555834    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:04.561148    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:04.561158    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:04.576564    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:04.576577    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:04.589337    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:04.589347    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:04.601839    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:04.601852    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:04.614072    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:04.614087    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:04.626716    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:04.626730    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:04.639954    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:04.639965    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:04.677791    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:04.677807    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:04.693596    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:04.693604    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:04.714874    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:04.714887    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:04.753499    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:04.753516    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:04.765153    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:04.765168    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:04.781423    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:04.781435    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:04.793801    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:04.793813    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
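	From 12:49:04 onward the coredns filter matches four IDs instead of two. Because docker ps -a lists exited containers as well as running ones, a coredns restart (the most likely cause here) leaves the old IDs behind, and both generations match the name filter; the status column makes the split visible:

	    # Shows old (Exited) and new (Up) coredns containers side by side.
	    docker ps -a --filter=name=k8s_coredns --format '{{.ID}} {{.Status}}'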
	I1009 12:49:07.322730    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:12.324981    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:12.325134    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:12.339843    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:12.339934    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:12.352038    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:12.352123    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:12.363254    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:12.363336    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:12.374762    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:12.374833    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:12.387465    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:12.387541    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:12.399558    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:12.399625    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:12.410726    4056 logs.go:282] 0 containers: []
	W1009 12:49:12.410735    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:12.410781    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:12.429774    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:12.429790    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:12.429796    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:12.468756    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:12.468769    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:12.496944    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:12.496956    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:12.509450    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:12.509465    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:12.535691    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:12.535706    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:12.551217    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:12.551232    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:12.564620    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:12.564632    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:12.570119    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:12.570130    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:12.589594    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:12.589605    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:12.601671    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:12.601683    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:12.617825    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:12.617841    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:12.632730    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:12.632743    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:12.670623    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:12.670635    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:12.687667    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:12.687679    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:12.700719    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:12.700731    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:15.216115    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:20.218374    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:20.218708    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:20.247114    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:20.247200    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:20.266402    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:20.266454    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:20.281376    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:20.281464    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:20.293614    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:20.293700    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:20.305617    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:20.305695    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:20.317148    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:20.317229    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:20.328971    4056 logs.go:282] 0 containers: []
	W1009 12:49:20.328982    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:20.329055    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:20.340617    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:20.340637    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:20.340643    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:20.360770    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:20.360782    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:20.375257    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:20.375270    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:20.391586    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:20.391597    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:20.417168    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:20.417180    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:20.433780    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:20.433796    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:20.446383    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:20.446396    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:20.464249    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:20.464261    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:20.476893    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:20.476908    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:20.514491    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:20.514503    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:20.520125    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:20.520138    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:20.560068    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:20.560079    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:20.573437    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:20.573451    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:20.589011    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:20.589027    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:20.611292    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:20.611301    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
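	The "container status" command above is runtime-agnostic by construction: `which crictl || echo crictl` substitutes a real path when crictl is installed (and the bare name, which then fails, when it is not), and the outer || falls back to plain Docker. The same chain, spelled out with $() in place of backticks:

	    # Try crictl first; if it is missing or errors, list containers via Docker.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a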
	I1009 12:49:23.125888    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:28.128120    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:28.128399    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:28.152043    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:28.152158    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:28.168623    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:28.168709    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:28.182033    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:28.182100    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:28.193496    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:28.193543    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:28.204535    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:28.204617    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:28.216674    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:28.216762    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:28.227446    4056 logs.go:282] 0 containers: []
	W1009 12:49:28.227471    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:28.227545    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:28.240321    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:28.240342    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:28.240348    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:28.257912    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:28.257921    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:28.275494    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:28.275504    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:28.287301    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:28.287312    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:28.324099    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:28.324123    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:28.339829    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:28.339841    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:28.352642    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:28.352657    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:28.367735    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:28.367743    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:28.380441    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:28.380450    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:28.405616    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:28.405629    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:28.421770    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:28.421783    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:28.426269    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:28.426276    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:28.462759    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:28.462771    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:28.479319    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:28.479331    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:28.499990    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:28.500002    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:31.017364    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:36.019619    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:36.019851    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:36.035894    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:36.035990    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:36.048468    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:36.048548    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:36.059485    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:36.059569    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:36.070577    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:36.070652    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:36.081678    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:36.081713    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:36.093630    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:36.093671    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:36.104227    4056 logs.go:282] 0 containers: []
	W1009 12:49:36.104240    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:36.104313    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:36.115691    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:36.115711    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:36.115718    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:36.153874    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:36.153886    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:36.167355    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:36.167368    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:36.180141    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:36.180154    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:36.193118    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:36.193132    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:36.219406    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:36.219415    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:36.257152    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:36.257178    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:36.270281    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:36.270293    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:36.284675    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:36.284687    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:36.297340    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:36.297353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:36.313513    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:36.313523    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:36.319826    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:36.319839    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:36.335691    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:36.335706    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:36.348391    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:36.348404    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:36.366910    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:36.366920    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:38.881487    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:43.878906    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:43.879084    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:43.890828    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:43.890909    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:43.901869    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:43.901946    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:43.913035    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:43.913114    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:43.923335    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:43.923409    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:43.933973    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:43.934054    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:43.945880    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:43.945955    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:43.957790    4056 logs.go:282] 0 containers: []
	W1009 12:49:43.957808    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:43.957903    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:43.969680    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:43.969710    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:43.969717    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:44.009503    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:44.009519    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:44.025656    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:44.025666    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:44.040368    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:44.040379    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:44.066860    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:44.066874    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:44.080026    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:44.080042    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:44.092310    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:44.092326    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:44.110392    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:44.110403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:44.129310    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:44.129326    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:44.141854    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:44.141867    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:44.155447    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:44.155460    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:44.160799    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:44.160808    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:44.175185    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:44.175197    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:44.193961    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:44.193977    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:44.231712    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:44.231728    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:46.745112    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:51.744216    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:51.744390    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:51.758251    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:51.758335    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:51.769199    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:51.769279    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:51.779645    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:51.779718    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:51.790328    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:51.790394    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:51.800981    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:51.801060    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:51.812997    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:51.813082    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:51.823855    4056 logs.go:282] 0 containers: []
	W1009 12:49:51.823867    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:51.823936    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:51.835423    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:51.835442    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:51.835447    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:51.850261    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:51.850271    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:51.868531    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:51.868545    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:51.906023    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:51.906037    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:51.910771    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:51.910783    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:51.928037    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:51.928048    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:51.939973    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:51.939988    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:51.952419    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:51.952431    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:51.990878    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:51.990889    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:52.005382    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:52.005393    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:52.018305    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:52.018319    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:52.034560    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:52.034578    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:52.047388    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:52.047399    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:52.073444    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:52.073453    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:52.085863    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:52.085875    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:54.600178    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:59.599729    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:59.600005    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:59.632521    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:59.632626    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:59.646925    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:59.647005    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:59.658646    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:59.658730    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:59.673881    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:59.673958    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:59.685150    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:59.685232    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:59.696739    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:59.696824    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:59.708833    4056 logs.go:282] 0 containers: []
	W1009 12:49:59.708848    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:59.708930    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:59.720732    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:59.720775    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:59.720784    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:59.734128    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:59.734139    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:59.753693    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:59.753711    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:59.780399    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:59.780410    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:59.818914    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:59.818926    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:59.832370    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:59.832381    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:59.845791    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:59.845802    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:59.886813    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:59.886828    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:59.891624    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:59.891635    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:59.904181    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:59.904192    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:59.919647    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:59.919661    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:59.941200    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:59.941210    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:59.953993    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:59.954005    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:59.967732    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:59.967743    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:59.982956    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:59.982969    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:02.496461    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:07.497876    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:07.498332    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:07.530195    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:07.530344    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:07.549179    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:07.549287    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:07.565041    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:07.565130    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:07.577820    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:07.577898    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:07.589275    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:07.589354    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:07.601895    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:07.601980    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:07.615137    4056 logs.go:282] 0 containers: []
	W1009 12:50:07.615151    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:07.615226    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:07.627101    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:07.627120    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:07.627127    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:07.632965    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:07.632974    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:07.681368    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:07.681381    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:07.698090    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:07.698101    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:07.710875    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:07.710886    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:07.747951    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:07.747969    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:07.762191    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:07.762203    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:07.777638    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:07.777654    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:07.796740    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:07.796756    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:07.814451    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:07.814461    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:07.826983    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:07.826995    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:07.842357    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:07.842370    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:07.855338    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:07.855348    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:07.867861    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:07.867873    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:07.893420    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:07.893440    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:10.406978    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:15.408326    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:15.408650    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:15.437248    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:15.437365    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:15.454944    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:15.455042    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:15.469628    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:15.469714    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:15.487332    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:15.487411    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:15.498641    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:15.498725    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:15.510392    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:15.510467    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:15.521062    4056 logs.go:282] 0 containers: []
	W1009 12:50:15.521077    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:15.521142    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:15.532138    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:15.532152    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:15.532157    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:15.545407    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:15.545418    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:15.558059    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:15.558068    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:15.570439    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:15.570452    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:15.582915    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:15.582924    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:15.603215    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:15.603227    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:15.609142    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:15.609155    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:15.650796    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:15.650809    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:15.670918    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:15.670933    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:15.685895    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:15.685904    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:15.699020    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:15.699034    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:15.712001    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:15.712014    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:15.753648    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:15.753663    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:15.766317    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:15.766326    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:15.787263    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:15.787276    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:18.314924    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:23.316783    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:23.317014    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:23.336328    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:23.336378    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:23.352817    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:23.352920    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:23.364292    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:23.364374    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:23.376035    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:23.376106    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:23.387007    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:23.387084    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:23.398048    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:23.398124    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:23.409492    4056 logs.go:282] 0 containers: []
	W1009 12:50:23.409503    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:23.409571    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:23.424349    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:23.424362    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:23.424368    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:23.437185    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:23.437195    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:23.475157    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:23.475171    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:23.491020    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:23.491036    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:23.503816    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:23.503831    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:23.516696    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:23.516710    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:23.530511    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:23.530522    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:23.545843    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:23.545852    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:23.550819    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:23.550830    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:23.590088    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:23.590101    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:23.605731    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:23.605750    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:23.618926    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:23.618938    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:23.639830    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:23.639842    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:23.652896    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:23.652908    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:23.677677    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:23.677687    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:26.191916    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:31.193866    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:31.194094    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:31.207773    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:31.207865    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:31.218716    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:31.218801    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:31.230591    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:31.230680    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:31.241611    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:31.241700    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:31.252927    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:31.253007    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:31.265037    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:31.265116    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:31.279934    4056 logs.go:282] 0 containers: []
	W1009 12:50:31.279947    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:31.280018    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:31.292039    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:31.292057    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:31.292063    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:31.329960    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:31.329978    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:31.355384    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:31.355397    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:31.368716    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:31.368730    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:31.406668    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:31.406681    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:31.418852    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:31.418863    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:31.431418    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:31.431429    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:31.458798    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:31.458811    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:31.475217    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:31.475228    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:31.490485    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:31.490502    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:31.503247    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:31.503262    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:31.519009    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:31.519023    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:31.531914    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:31.531929    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:31.544149    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:31.544159    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:31.549214    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:31.549226    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:34.064126    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:39.064959    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:39.065259    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:39.092052    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:39.092164    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:39.110007    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:39.110102    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:39.123292    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:39.123378    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:39.134856    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:39.134941    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:39.146712    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:39.146789    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:39.158371    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:39.158455    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:39.169957    4056 logs.go:282] 0 containers: []
	W1009 12:50:39.169971    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:39.170042    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:39.181857    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:39.181874    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:39.181881    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:39.207320    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:39.207340    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:39.220728    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:39.220741    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:39.234635    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:39.234647    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:39.246525    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:39.246533    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:39.262299    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:39.262314    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:39.277652    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:39.277668    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:39.291401    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:39.291413    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:39.305072    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:39.305085    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:39.321675    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:39.321688    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:39.340764    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:39.340783    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:39.379051    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:39.379063    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:39.385805    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:39.385816    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:39.427736    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:39.427749    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:39.440717    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:39.440730    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:41.955145    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:46.957143    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:46.961573    4056 out.go:201] 
	W1009 12:50:46.965446    4056 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1009 12:50:46.965459    4056 out.go:270] * 
	W1009 12:50:46.966109    4056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:50:46.978547    4056 out.go:201] 

** /stderr **
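The tail of this log shows the failure mode: the new binary polls the apiserver's /healthz endpoint at https://10.0.2.15:8443 roughly every 2.5 seconds with a 5-second client timeout, re-gathering container logs between attempts, until the 6m0s node-start budget is exhausted and it exits with GUEST_START (exit status 80). Each pass also records container status with `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`, falling back to Docker when crictl is absent. A minimal sketch of the same probe loop, with the endpoint and timings copied from the log (an illustration, not minikube's actual implementation):

    # Poll /healthz the way the log does: 5s per attempt, ~2.5s between
    # attempts, give up after the 6-minute budget.
    deadline=$(( $(date +%s) + 360 ))
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok'; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "apiserver healthz never reported healthy" >&2
        exit 1
      fi
      sleep 2.5
    done
    echo "apiserver healthy"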
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-763000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-09 12:50:47.108352 -0700 PDT m=+3913.007130335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-763000 -n running-upgrade-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-763000 -n running-upgrade-763000: exit status 2 (15.6761425s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-763000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo cat                            | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo cat                            | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo cat                            | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo cat                            | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo                                | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo find                           | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-311000 sudo crio                           | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-311000                                     | cilium-311000             | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT | 09 Oct 24 12:40 PDT |
	| start   | -p kubernetes-upgrade-134000                         | kubernetes-upgrade-134000 | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-935000                             | offline-docker-935000     | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT | 09 Oct 24 12:40 PDT |
	| stop    | -p kubernetes-upgrade-134000                         | kubernetes-upgrade-134000 | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT | 09 Oct 24 12:40 PDT |
	| start   | -p stopped-upgrade-220000                            | minikube                  | jenkins | v1.26.0 | 09 Oct 24 12:40 PDT | 09 Oct 24 12:41 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-134000                         | kubernetes-upgrade-134000 | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-134000                         | kubernetes-upgrade-134000 | jenkins | v1.34.0 | 09 Oct 24 12:40 PDT | 09 Oct 24 12:40 PDT |
	| start   | -p running-upgrade-763000                            | minikube                  | jenkins | v1.26.0 | 09 Oct 24 12:40 PDT | 09 Oct 24 12:42 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-220000 stop                          | minikube                  | jenkins | v1.26.0 | 09 Oct 24 12:41 PDT | 09 Oct 24 12:42 PDT |
	| start   | -p stopped-upgrade-220000                            | stopped-upgrade-220000    | jenkins | v1.34.0 | 09 Oct 24 12:42 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-763000                            | running-upgrade-763000    | jenkins | v1.34.0 | 09 Oct 24 12:42 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 12:42:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 12:42:10.435886    4056 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:42:10.436082    4056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:42:10.436086    4056 out.go:358] Setting ErrFile to fd 2...
	I1009 12:42:10.436088    4056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:42:10.436205    4056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:42:10.437195    4056 out.go:352] Setting JSON to false
	I1009 12:42:10.455531    4056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4300,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:42:10.455657    4056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:42:10.460135    4056 out.go:177] * [running-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:42:10.468121    4056 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:42:10.468220    4056 notify.go:220] Checking for updates...
	I1009 12:42:10.476072    4056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:10.480044    4056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:42:10.483027    4056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:42:10.486053    4056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:42:10.489080    4056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:42:10.492285    4056 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:10.495012    4056 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 12:42:10.498076    4056 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:42:10.502037    4056 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:42:10.509058    4056 start.go:297] selected driver: qemu2
	I1009 12:42:10.509066    4056 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:10.509112    4056 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:42:10.512061    4056 cni.go:84] Creating CNI manager for ""
	I1009 12:42:10.512092    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:10.512117    4056 start.go:340] cluster config:
	{Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:10.512181    4056 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:42:10.520063    4056 out.go:177] * Starting "running-upgrade-763000" primary control-plane node in "running-upgrade-763000" cluster
	I1009 12:42:10.524060    4056 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:10.524072    4056 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1009 12:42:10.524077    4056 cache.go:56] Caching tarball of preloaded images
	I1009 12:42:10.524125    4056 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:42:10.524129    4056 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1009 12:42:10.524174    4056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/config.json ...
	I1009 12:42:10.524522    4056 start.go:360] acquireMachinesLock for running-upgrade-763000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:42:21.217943    4045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/config.json ...
	I1009 12:42:21.218360    4045 machine.go:93] provisionDockerMachine start ...
	I1009 12:42:21.218452    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.218755    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.218762    4045 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 12:42:21.279299    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 12:42:21.279325    4045 buildroot.go:166] provisioning hostname "stopped-upgrade-220000"
	I1009 12:42:21.279416    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.279532    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.279537    4045 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-220000 && echo "stopped-upgrade-220000" | sudo tee /etc/hostname
	I1009 12:42:21.339590    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-220000
	
	I1009 12:42:21.339661    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.339769    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.339778    4045 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-220000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-220000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-220000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 12:42:21.399658    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 12:42:21.399672    4045 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19780-1164/.minikube CaCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19780-1164/.minikube}
	I1009 12:42:21.399687    4045 buildroot.go:174] setting up certificates
	I1009 12:42:21.399691    4045 provision.go:84] configureAuth start
	I1009 12:42:21.399714    4045 provision.go:143] copyHostCerts
	I1009 12:42:21.399838    4045 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem, removing ...
	I1009 12:42:21.399856    4045 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem
	I1009 12:42:21.399972    4045 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem (1078 bytes)
	I1009 12:42:21.400172    4045 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem, removing ...
	I1009 12:42:21.400176    4045 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem
	I1009 12:42:21.400228    4045 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem (1123 bytes)
	I1009 12:42:21.400342    4045 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem, removing ...
	I1009 12:42:21.400345    4045 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem
	I1009 12:42:21.400394    4045 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem (1679 bytes)
	I1009 12:42:21.400539    4045 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-220000 san=[127.0.0.1 localhost minikube stopped-upgrade-220000]
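	Here provision.go mints a server certificate signed by the minikube CA, with the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-220000) so the Docker TLS endpoint verifies under any of those names. A rough openssl equivalent, purely illustrative (minikube does this in Go, not by shelling out; file names follow the log):

    # Hypothetical reproduction of the server-cert step with openssl.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.stopped-upgrade-220000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-220000")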
	I1009 12:42:21.505654    4045 provision.go:177] copyRemoteCerts
	I1009 12:42:21.506220    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 12:42:21.506231    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:42:21.535897    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 12:42:21.543173    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 12:42:21.550078    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 12:42:21.556616    4045 provision.go:87] duration metric: took 156.922083ms to configureAuth
	I1009 12:42:21.556625    4045 buildroot.go:189] setting minikube options for container-runtime
	I1009 12:42:21.556727    4045 config.go:182] Loaded profile config "stopped-upgrade-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:21.556776    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.556945    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.556950    4045 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 12:42:21.617218    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 12:42:21.617230    4045 buildroot.go:70] root file system type: tmpfs
	I1009 12:42:21.617288    4045 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 12:42:21.617348    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.617454    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.617487    4045 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 12:42:21.678077    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 12:42:21.678137    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.678245    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.678253    4045 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
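Editor's note: this one-liner is an idempotent unit update — the new unit is installed, and Docker reloaded and restarted, only when `diff` exits non-zero (files differ, or the installed unit does not exist yet, as happens a few lines below). A sketch of driving the same pattern from Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// updateUnit installs a freshly rendered systemd unit only when it
	// differs from the one on disk, then force-reloads, enables, and
	// restarts the service -- the same shell pattern as the log above.
	func updateUnit(unit, service string) error {
		script := fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
				"sudo systemctl -f restart %[2]s; }", unit, service)
		return exec.Command("sh", "-c", script).Run()
	}

	func main() {
		if err := updateUnit("/lib/systemd/system/docker.service", "docker"); err != nil {
			fmt.Println("unit update failed:", err)
		}
	}

The "can't stat '/lib/systemd/system/docker.service'" output further down is the expected first-boot case of this pattern, not a failure: the missing file makes diff exit non-zero, which triggers the install and the "Created symlink" enablement.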
	I1009 12:42:22.161126    4056 start.go:364] duration metric: took 11.636869s to acquireMachinesLock for "running-upgrade-763000"
	I1009 12:42:22.161174    4056 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:42:22.161181    4056 fix.go:54] fixHost starting: 
	I1009 12:42:22.162081    4056 fix.go:112] recreateIfNeeded on running-upgrade-763000: state=Running err=<nil>
	W1009 12:42:22.162093    4056 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:42:22.169232    4056 out.go:177] * Updating the running qemu2 "running-upgrade-763000" VM ...
	I1009 12:42:22.059211    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1009 12:42:22.059230    4045 machine.go:96] duration metric: took 840.886708ms to provisionDockerMachine
	I1009 12:42:22.059237    4045 start.go:293] postStartSetup for "stopped-upgrade-220000" (driver="qemu2")
	I1009 12:42:22.059243    4045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 12:42:22.059328    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 12:42:22.059341    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:42:22.092143    4045 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 12:42:22.093459    4045 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 12:42:22.093466    4045 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/addons for local assets ...
	I1009 12:42:22.093557    4045 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/files for local assets ...
	I1009 12:42:22.093709    4045 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem -> 16862.pem in /etc/ssl/certs
	I1009 12:42:22.093918    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 12:42:22.096441    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:22.103646    4045 start.go:296] duration metric: took 44.404625ms for postStartSetup
	I1009 12:42:22.103660    4045 fix.go:56] duration metric: took 20.304353375s for fixHost
	I1009 12:42:22.103703    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.103808    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:22.103814    4045 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 12:42:22.160929    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728502942.516700962
	
	I1009 12:42:22.160938    4045 fix.go:216] guest clock: 1728502942.516700962
	I1009 12:42:22.160942    4045 fix.go:229] Guest: 2024-10-09 12:42:22.516700962 -0700 PDT Remote: 2024-10-09 12:42:22.103662 -0700 PDT m=+20.506754126 (delta=413.038962ms)
	I1009 12:42:22.160952    4045 fix.go:200] guest clock delta is within tolerance: 413.038962ms
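Editor's note: the guest-clock check above runs `date +%s.%N` on the VM and compares it to the host clock. A sketch of the delta computation, reproducing the ~413ms value from this run; the 2s tolerance used here is an assumption for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how
	// far the guest clock drifts from the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1728502942103662000) // "Remote" timestamp from the log
		d, _ := clockDelta("1728502942.516700962\n", host)
		// prints ~413ms, matching the log's delta up to float64 parse precision
		fmt.Printf("delta=%v within tolerance: %v\n", d, d > -2*time.Second && d < 2*time.Second)
	}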
	I1009 12:42:22.160955    4045 start.go:83] releasing machines lock for "stopped-upgrade-220000", held for 20.361656083s
	I1009 12:42:22.161042    4045 ssh_runner.go:195] Run: cat /version.json
	I1009 12:42:22.161056    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:42:22.161043    4045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 12:42:22.161123    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	W1009 12:42:22.161703    4045 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53878->127.0.0.1:53646: read: connection reset by peer
	I1009 12:42:22.161721    4045 retry.go:31] will retry after 304.87533ms: ssh: handshake failed: read tcp 127.0.0.1:53878->127.0.0.1:53646: read: connection reset by peer
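Editor's note: the handshake failure above is retried after a jittered delay. A sketch of such a backoff loop (this is illustrative, not minikube's actual retry helper):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryDial retries a flaky dial-style operation with jittered
	// exponential backoff between attempts.
	func retryDial(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			wait := base << uint(i)                           // exponential step
			wait += time.Duration(rand.Int63n(int64(wait / 2))) // up to +50% jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryDial(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: connection reset by peer")
			}
			return nil
		})
		fmt.Println("result:", err)
	}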
	W1009 12:42:22.190184    4045 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1009 12:42:22.190255    4045 ssh_runner.go:195] Run: systemctl --version
	I1009 12:42:22.192057    4045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 12:42:22.193659    4045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 12:42:22.193694    4045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1009 12:42:22.196693    4045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1009 12:42:22.201542    4045 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
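Editor's note: the find/sed pipeline above rewrites the subnet in any bridge/podman CNI config to the 10.244.0.0/16 pod CIDR. A Go sketch of the same edit done structurally via JSON rather than sed; the conflist shape here is simplified for illustration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// rewriteSubnet forces the bridge plugin's IPAM subnet in a CNI
	// conflist to the given pod CIDR.
	func rewriteSubnet(conflist []byte, subnet string) ([]byte, error) {
		var doc map[string]interface{}
		if err := json.Unmarshal(conflist, &doc); err != nil {
			return nil, err
		}
		plugins, _ := doc["plugins"].([]interface{})
		for _, p := range plugins {
			plugin, _ := p.(map[string]interface{})
			if plugin["type"] != "bridge" {
				continue
			}
			if ipam, ok := plugin["ipam"].(map[string]interface{}); ok {
				ipam["subnet"] = subnet
			}
		}
		return json.MarshalIndent(doc, "", "  ")
	}

	func main() {
		in := []byte(`{"name":"podman","plugins":[{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}]}`)
		out, err := rewriteSubnet(in, "10.244.0.0/16")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}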
	I1009 12:42:22.201549    4045 start.go:495] detecting cgroup driver to use...
	I1009 12:42:22.201682    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:22.208784    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1009 12:42:22.212295    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 12:42:22.215706    4045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 12:42:22.215741    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 12:42:22.219381    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.222572    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 12:42:22.225310    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.228338    4045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 12:42:22.231735    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 12:42:22.235137    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 12:42:22.238419    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 12:42:22.241479    4045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 12:42:22.244300    4045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 12:42:22.247579    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:22.327092    4045 ssh_runner.go:195] Run: sudo systemctl restart containerd
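Editor's note: the decisive sed edit in the block above flips SystemdCgroup to false in containerd's config.toml so containerd drives cgroups via cgroupfs, matching dockerd. A sketch of the same substitution in Go:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCgroupfs mirrors the key sed edit above: force
	// SystemdCgroup = false in containerd's config.toml.
	var systemdCgroupRe = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)

	func setCgroupfs(config string) string {
		return systemdCgroupRe.ReplaceAllString(config, "${1}SystemdCgroup = false")
	}

	func main() {
		in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
			"  SystemdCgroup = true\n"
		fmt.Print(setCgroupfs(in))
	}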
	I1009 12:42:22.334434    4045 start.go:495] detecting cgroup driver to use...
	I1009 12:42:22.334679    4045 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 12:42:22.341263    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:22.347108    4045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 12:42:22.355136    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:22.360466    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 12:42:22.365611    4045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 12:42:22.420580    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 12:42:22.427502    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:22.435644    4045 ssh_runner.go:195] Run: which cri-dockerd
	I1009 12:42:22.437706    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 12:42:22.442909    4045 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 12:42:22.450908    4045 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 12:42:22.540916    4045 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 12:42:22.619766    4045 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 12:42:22.619848    4045 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 12:42:22.626180    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:22.710867    4045 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:23.833609    4045 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122753917s)
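Editor's note: the 130-byte daemon.json scp'd to /etc/docker a few lines above is what selects the cgroupfs driver for dockerd before this restart. The full set of keys is not shown in the log; a sketch rendering the one that matters for the cgroup driver:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Render a minimal daemon.json selecting the cgroupfs driver.
	// The exact keys minikube writes may differ (assumption).
	func main() {
		cfg := map[string]interface{}{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out))
	}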
	I1009 12:42:23.833685    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1009 12:42:23.838797    4045 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1009 12:42:23.845649    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:23.850387    4045 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 12:42:23.929808    4045 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 12:42:24.010743    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:24.086133    4045 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 12:42:24.091656    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:24.096726    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:24.175413    4045 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1009 12:42:24.214829    4045 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 12:42:24.214931    4045 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 12:42:24.217704    4045 start.go:563] Will wait 60s for crictl version
	I1009 12:42:24.217769    4045 ssh_runner.go:195] Run: which crictl
	I1009 12:42:24.219047    4045 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 12:42:24.233587    4045 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1009 12:42:24.233665    4045 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:24.252327    4045 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:22.173092    4056 machine.go:93] provisionDockerMachine start ...
	I1009 12:42:22.173145    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.173270    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.173274    4056 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 12:42:22.234313    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763000
	
	I1009 12:42:22.234330    4056 buildroot.go:166] provisioning hostname "running-upgrade-763000"
	I1009 12:42:22.234375    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.234495    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.234501    4056 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-763000 && echo "running-upgrade-763000" | sudo tee /etc/hostname
	I1009 12:42:22.299631    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-763000
	
	I1009 12:42:22.299707    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.299824    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.299833    4056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-763000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-763000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-763000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 12:42:22.374079    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
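Editor's note: the shell block above is an idempotent /etc/hosts update — do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 entry if one exists, otherwise append. The same logic in Go:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname makes sure /etc/hosts maps 127.0.1.1 to the machine
	// name, mirroring the grep/sed/tee branches in the shell block above.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "running-upgrade-763000"))
	}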
	I1009 12:42:22.374096    4056 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19780-1164/.minikube CaCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19780-1164/.minikube}
	I1009 12:42:22.374106    4056 buildroot.go:174] setting up certificates
	I1009 12:42:22.374124    4056 provision.go:84] configureAuth start
	I1009 12:42:22.374132    4056 provision.go:143] copyHostCerts
	I1009 12:42:22.374204    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem, removing ...
	I1009 12:42:22.374211    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem
	I1009 12:42:22.374320    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem (1078 bytes)
	I1009 12:42:22.374491    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem, removing ...
	I1009 12:42:22.374495    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem
	I1009 12:42:22.374539    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem (1123 bytes)
	I1009 12:42:22.374649    4056 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem, removing ...
	I1009 12:42:22.374652    4056 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem
	I1009 12:42:22.374693    4056 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem (1679 bytes)
	I1009 12:42:22.374791    4056 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-763000 san=[127.0.0.1 localhost minikube running-upgrade-763000]
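Editor's note: "generating server cert" above means signing a server certificate with the minikube CA, embedding the SANs listed in the log line. A compact crypto/x509 sketch of that step (self-signed CA generated inline for demonstration; the real provisioner loads ca.pem/ca-key.pem from disk):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-763000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			// the SANs from the provision.go line above
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "running-upgrade-763000"},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("server cert: %d DER bytes, SANs %v + %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
	}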
	I1009 12:42:22.456781    4056 provision.go:177] copyRemoteCerts
	I1009 12:42:22.456943    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 12:42:22.456962    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:42:22.490551    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 12:42:22.497823    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 12:42:22.505783    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 12:42:22.516484    4056 provision.go:87] duration metric: took 142.35775ms to configureAuth
	I1009 12:42:22.516497    4056 buildroot.go:189] setting minikube options for container-runtime
	I1009 12:42:22.516641    4056 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:22.516689    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.516775    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.516781    4056 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 12:42:22.580595    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 12:42:22.580606    4056 buildroot.go:70] root file system type: tmpfs
	I1009 12:42:22.580680    4056 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 12:42:22.580754    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.580879    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.580913    4056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 12:42:22.646106    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 12:42:22.646180    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.646298    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.646308    4056 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 12:42:22.711149    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 12:42:22.711161    4056 machine.go:96] duration metric: took 538.078167ms to provisionDockerMachine
	I1009 12:42:22.711167    4056 start.go:293] postStartSetup for "running-upgrade-763000" (driver="qemu2")
	I1009 12:42:22.711179    4056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 12:42:22.711218    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 12:42:22.711228    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:42:22.743717    4056 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 12:42:22.745013    4056 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 12:42:22.745021    4056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/addons for local assets ...
	I1009 12:42:22.745096    4056 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/files for local assets ...
	I1009 12:42:22.745185    4056 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem -> 16862.pem in /etc/ssl/certs
	I1009 12:42:22.745288    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 12:42:22.747904    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:22.754615    4056 start.go:296] duration metric: took 43.443959ms for postStartSetup
	I1009 12:42:22.754631    4056 fix.go:56] duration metric: took 593.468459ms for fixHost
	I1009 12:42:22.754674    4056 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.754788    4056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008e6480] 0x1008e8cc0 <nil>  [] 0s} localhost 53683 <nil> <nil>}
	I1009 12:42:22.754793    4056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 12:42:22.820401    4056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728502943.081851208
	
	I1009 12:42:22.820412    4056 fix.go:216] guest clock: 1728502943.081851208
	I1009 12:42:22.820417    4056 fix.go:229] Guest: 2024-10-09 12:42:23.081851208 -0700 PDT Remote: 2024-10-09 12:42:22.754632 -0700 PDT m=+12.344032710 (delta=327.219208ms)
	I1009 12:42:22.820430    4056 fix.go:200] guest clock delta is within tolerance: 327.219208ms
	I1009 12:42:22.820433    4056 start.go:83] releasing machines lock for "running-upgrade-763000", held for 659.307334ms
	I1009 12:42:22.820519    4056 ssh_runner.go:195] Run: cat /version.json
	I1009 12:42:22.820529    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:42:22.820520    4056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 12:42:22.820560    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	W1009 12:42:22.821169    4056 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53683: connect: connection refused
	I1009 12:42:22.821194    4056 retry.go:31] will retry after 174.832429ms: dial tcp [::1]:53683: connect: connection refused
	W1009 12:42:22.853324    4056 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1009 12:42:22.853382    4056 ssh_runner.go:195] Run: systemctl --version
	I1009 12:42:22.855272    4056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 12:42:22.856874    4056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 12:42:22.856908    4056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1009 12:42:22.861369    4056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1009 12:42:22.870919    4056 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 12:42:22.870935    4056 start.go:495] detecting cgroup driver to use...
	I1009 12:42:22.871006    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:22.877608    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1009 12:42:22.889061    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 12:42:22.892695    4056 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 12:42:22.892757    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 12:42:22.895874    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.898614    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 12:42:22.901688    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.905171    4056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 12:42:22.909367    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 12:42:22.913950    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 12:42:22.918019    4056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 12:42:22.926790    4056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 12:42:22.931170    4056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 12:42:22.935317    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:23.063260    4056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 12:42:23.077757    4056 start.go:495] detecting cgroup driver to use...
	I1009 12:42:23.077857    4056 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 12:42:23.086641    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:23.137747    4056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 12:42:23.150479    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:23.156202    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 12:42:23.160871    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:23.166575    4056 ssh_runner.go:195] Run: which cri-dockerd
	I1009 12:42:23.167977    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 12:42:23.171001    4056 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 12:42:23.176197    4056 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 12:42:23.299765    4056 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 12:42:23.414555    4056 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 12:42:23.414688    4056 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 12:42:23.424788    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:23.521398    4056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:25.791190    4056 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.269840958s)
	I1009 12:42:25.791281    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1009 12:42:25.796926    4056 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1009 12:42:25.805807    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:25.811050    4056 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 12:42:25.905420    4056 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 12:42:26.000139    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:26.091057    4056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 12:42:26.098440    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:26.104484    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:26.192095    4056 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1009 12:42:26.241537    4056 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 12:42:26.241651    4056 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 12:42:26.244367    4056 start.go:563] Will wait 60s for crictl version
	I1009 12:42:26.244439    4056 ssh_runner.go:195] Run: which crictl
	I1009 12:42:26.246168    4056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 12:42:26.260011    4056 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1009 12:42:26.260092    4056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:26.275351    4056 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:24.272692    4045 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1009 12:42:24.272807    4045 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1009 12:42:24.274424    4045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 12:42:24.278809    4045 kubeadm.go:883] updating cluster {Name:stopped-upgrade-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53678 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1009 12:42:24.278859    4045 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:24.278924    4045 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:24.290514    4045 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:24.290524    4045 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
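Editor's note: the preload check fails here because the images listed above carry k8s.gcr.io tags (the old base image), while the expected name is under registry.k8s.io, so the membership test comes up empty and the runner falls back to the preload tarball. A sketch of that check:

	package main

	import "fmt"

	// preloaded mirrors docker.go's check above: the expected image name
	// must appear verbatim in the `docker images` output.
	func preloaded(images []string, want string) bool {
		for _, img := range images {
			if img == want {
				return true
			}
		}
		return false
	}

	func main() {
		got := []string{
			"k8s.gcr.io/kube-apiserver:v1.24.1",
			"k8s.gcr.io/kube-proxy:v1.24.1",
		}
		// false -> fall back to the preload tarball
		fmt.Println(preloaded(got, "registry.k8s.io/kube-apiserver:v1.24.1"))
	}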
	I1009 12:42:24.290587    4045 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:24.293984    4045 ssh_runner.go:195] Run: which lz4
	I1009 12:42:24.295594    4045 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 12:42:24.297369    4045 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 12:42:24.297394    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1009 12:42:25.284738    4045 docker.go:649] duration metric: took 989.218333ms to copy over tarball
	I1009 12:42:25.284813    4045 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 12:42:26.478324    4045 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.193525125s)
	I1009 12:42:26.478338    4045 ssh_runner.go:146] rm: /preloaded.tar.lz4
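Editor's note: the preload sequence above is probe, copy, extract, delete. A guest-side sketch using the exact tar flags from the log (the ~359MB scp transport is elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Probe for the preload tarball, extract it into /var preserving
	// security xattrs, then remove it -- the steps shown in the log above.
	func main() {
		if err := exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run(); err != nil {
			fmt.Println("tarball missing; the runner scps the cached preload first")
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v: %s\n", err, out)
			return
		}
		_ = exec.Command("sudo", "rm", "/preloaded.tar.lz4").Run()
	}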
	I1009 12:42:26.495219    4045 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:26.499002    4045 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1009 12:42:26.504944    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:26.593535    4045 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:26.299801    4056 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1009 12:42:26.299903    4056 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1009 12:42:26.301552    4056 kubeadm.go:883] updating cluster {Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1009 12:42:26.301606    4056 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:26.301662    4056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:26.314357    4056 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:26.314365    4056 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:26.314440    4056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:26.318359    4056 ssh_runner.go:195] Run: which lz4
	I1009 12:42:26.320155    4056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 12:42:26.321675    4056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 12:42:26.321696    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1009 12:42:27.260257    4056 docker.go:649] duration metric: took 940.188792ms to copy over tarball
	I1009 12:42:27.260330    4056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 12:42:28.368217    4056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.107904167s)
	I1009 12:42:28.368238    4056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 12:42:28.386031    4056 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:28.389183    4056 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1009 12:42:28.394313    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:28.481429    4056 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:29.032655    4056 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:29.053077    4056 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:29.053088    4056 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:29.053092    4056 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 12:42:29.057400    4056 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:29.060050    4056 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.062399    4056 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.062476    4056 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:29.065652    4056 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.066218    4056 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.068702    4056 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.068743    4056 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.071977    4056 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1009 12:42:29.072053    4056 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.074908    4056 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.075088    4056 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.077276    4056 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1009 12:42:29.077379    4056 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.078362    4056 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.079132    4056 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.533125    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.544363    4056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1009 12:42:29.544403    4056 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.544465    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:29.555342    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1009 12:42:29.565966    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.576942    4056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1009 12:42:29.577021    4056 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.577071    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:29.589878    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1009 12:42:29.598788    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.610552    4056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1009 12:42:29.610596    4056 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.610659    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:29.621387    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1009 12:42:29.630507    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.642261    4056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1009 12:42:29.642285    4056 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.642351    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.655212    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1009 12:42:29.717199    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1009 12:42:29.728487    4056 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1009 12:42:29.728518    4056 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1009 12:42:29.728584    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1009 12:42:29.739677    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1009 12:42:29.739813    4056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.741479    4056 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1009 12:42:29.741490    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1009 12:42:29.749428    4056 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.749440    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1009 12:42:29.776353    4056 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
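Editor's note: each cache_images cycle above follows the same shape — inspect the image in the runtime, remove any stale tag, then pipe the cached tarball into `docker load`. A sketch of one cycle, using the pause:3.7 values from this run (the hash comparison is an approximation of the real check):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// loadFromCache inspects the image in the runtime and, if it is missing
	// or at the wrong hash, removes the stale tag and loads the cached tar.
	func loadFromCache(image, wantID, cachedTar string) error {
		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the expected hash
		}
		exec.Command("docker", "rmi", image).Run() // drop any stale tag; ignore errors
		load := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("sudo cat %s | docker load", cachedTar))
		if o, err := load.CombinedOutput(); err != nil {
			return fmt.Errorf("docker load: %v: %s", err, o)
		}
		return nil
	}

	func main() {
		err := loadFromCache("registry.k8s.io/pause:3.7",
			"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
			"/var/lib/minikube/images/pause_3.7")
		fmt.Println("loaded:", err)
	}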
	I1009 12:42:29.802811    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.813787    4056 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1009 12:42:29.813812    4056 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.813880    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:29.825213    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1009 12:42:29.825339    4056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:29.826880    4056 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1009 12:42:29.826898    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W1009 12:42:29.848006    4056 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:29.848158    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.909717    4056 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1009 12:42:29.909744    4056 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.909808    4056 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:29.940648    4056 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1009 12:42:29.940776    4056 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:29.953752    4056 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1009 12:42:29.953779    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1009 12:42:30.047180    4056 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:30.047198    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1009 12:42:30.150463    4056 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1009 12:42:30.150484    4056 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:30.150489    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1009 12:42:30.292051    4056 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1009 12:42:28.121175    4045 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.52766525s)
	I1009 12:42:28.121283    4045 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:28.134275    4045 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:28.134295    4045 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:28.134301    4045 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
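
Every image below reports "needs transfer" for the same reason: the preloaded tarball carries k8s.gcr.io names while v1.24.1 expects registry.k8s.io names, so the required list is not a subset of what `docker images` returned. A tiny illustrative subset check (not minikube's actual docker.go code):

    package main

    import "fmt"

    // missingImages returns the required images absent from the runtime.
    func missingImages(want, got []string) []string {
    	have := make(map[string]bool, len(got))
    	for _, g := range got {
    		have[g] = true
    	}
    	var missing []string
    	for _, w := range want {
    		if !have[w] {
    			missing = append(missing, w)
    		}
    	}
    	return missing
    }

    func main() {
    	want := []string{"registry.k8s.io/kube-apiserver:v1.24.1", "registry.k8s.io/pause:3.7"}
    	got := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/pause:3.7"}
    	fmt.Println(missingImages(want, got)) // both: the registry.k8s.io names are absent
    }
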
	I1009 12:42:28.141062    4045 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.142944    4045 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.144548    4045 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.144674    4045 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.145439    4045 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.145603    4045 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.147636    4045 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:28.147643    4045 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.149362    4045 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.149381    4045 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.150576    4045 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:28.150682    4045 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:28.151812    4045 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.152236    4045 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1009 12:42:28.152769    4045 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:28.154361    4045 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1009 12:42:28.620835    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.635738    4045 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1009 12:42:28.637242    4045 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.637296    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W1009 12:42:28.638380    4045 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:28.638482    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.649465    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1009 12:42:28.652274    4045 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1009 12:42:28.652295    4045 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.652349    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.664202    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1009 12:42:28.664355    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:28.666119    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1009 12:42:28.666136    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1009 12:42:28.704327    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.708840    4045 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:28.708861    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1009 12:42:28.722515    4045 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1009 12:42:28.722542    4045 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.722601    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.751517    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.765902    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1009 12:42:28.765939    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1009 12:42:28.766089    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:28.767491    4045 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1009 12:42:28.767510    4045 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.767559    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.768112    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1009 12:42:28.768124    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1009 12:42:28.790272    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1009 12:42:28.888792    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.904632    4045 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1009 12:42:28.904659    4045 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.904727    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.942238    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1009 12:42:28.996749    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.035764    4045 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1009 12:42:29.035789    4045 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.035864    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.038313    4045 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:29.038348    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1009 12:42:29.051219    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1009 12:42:29.061265    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1009 12:42:29.201753    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1009 12:42:29.201807    4045 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1009 12:42:29.201827    4045 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1009 12:42:29.201866    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1009 12:42:29.212241    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1009 12:42:29.212374    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.214399    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1009 12:42:29.214411    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1009 12:42:29.223651    4045 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.223665    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1009 12:42:29.251543    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1009 12:42:31.810874    4056 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:31.812278    4056 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:31.849914    4056 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 12:42:31.849963    4056 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:31.850107    4056 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:31.869597    4056 cache_images.go:92] duration metric: took 2.816573166s to LoadCachedImages
	W1009 12:42:31.869662    4056 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1009 12:42:31.869672    4056 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1009 12:42:31.869746    4056 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-763000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 12:42:31.869832    4056 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 12:42:31.887427    4056 cni.go:84] Creating CNI manager for ""
	I1009 12:42:31.887439    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:31.887445    4056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 12:42:31.887454    4056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-763000 NodeName:running-upgrade-763000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 12:42:31.887531    4056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-763000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 12:42:31.887612    4056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1009 12:42:31.890679    4056 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 12:42:31.890718    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 12:42:31.893784    4056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1009 12:42:31.898865    4056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 12:42:31.903846    4056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
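
The kubeadm.yaml.new shipped above is rendered host-side from the kubeadm options struct logged earlier (the cgroup driver having just been read via `docker info --format {{.CgroupDriver}}`). A cut-down text/template sketch of such a render step, with a hypothetical params struct and only a fragment of the output; minikube's real template is much larger:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is an illustrative stand-in for minikube's options struct.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CgroupDriver     string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress: "10.0.2.15",
    		BindPort:         8443,
    		NodeName:         "running-upgrade-763000",
    		CgroupDriver:     "cgroupfs",
    	}
    	// Render to stdout; minikube instead scp's the result to
    	// /var/tmp/minikube/kubeadm.yaml.new on the guest.
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }
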
	I1009 12:42:31.909599    4056 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1009 12:42:31.910833    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:31.990409    4056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:42:31.996694    4056 certs.go:68] Setting up /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000 for IP: 10.0.2.15
	I1009 12:42:31.996705    4056 certs.go:194] generating shared ca certs ...
	I1009 12:42:31.996715    4056 certs.go:226] acquiring lock for ca certs: {Name:mkbf858b3b2074a12d126c3a2fed20f98f420e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:31.997051    4056 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key
	I1009 12:42:31.997283    4056 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key
	I1009 12:42:31.997290    4056 certs.go:256] generating profile certs ...
	I1009 12:42:31.997537    4056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key
	I1009 12:42:31.997552    4056 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee
	I1009 12:42:31.997560    4056 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1009 12:42:32.077281    4056 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee ...
	I1009 12:42:32.077293    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee: {Name:mk01607440c75d660555c30ff5d21966b49fe6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.077574    4056 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee ...
	I1009 12:42:32.077580    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee: {Name:mk2f700d3fcca1f4332e1fcf937d6867d9e88c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.077752    4056 certs.go:381] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt.7e4819ee -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt
	I1009 12:42:32.077875    4056 certs.go:385] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key.7e4819ee -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key
	I1009 12:42:32.078131    4056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/proxy-client.key
	I1009 12:42:32.078295    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem (1338 bytes)
	W1009 12:42:32.078433    4056 certs.go:480] ignoring /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686_empty.pem, impossibly tiny 0 bytes
	I1009 12:42:32.078441    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 12:42:32.078462    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem (1078 bytes)
	I1009 12:42:32.078483    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem (1123 bytes)
	I1009 12:42:32.078500    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem (1679 bytes)
	I1009 12:42:32.078545    4056 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:32.080011    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 12:42:32.088808    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 12:42:32.097017    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 12:42:32.105233    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 12:42:32.112960    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 12:42:32.120202    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 12:42:32.128147    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 12:42:32.136106    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 12:42:32.144381    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /usr/share/ca-certificates/16862.pem (1708 bytes)
	I1009 12:42:32.152296    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 12:42:32.159552    4056 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem --> /usr/share/ca-certificates/1686.pem (1338 bytes)
	I1009 12:42:32.167230    4056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 12:42:32.172708    4056 ssh_runner.go:195] Run: openssl version
	I1009 12:42:32.174805    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16862.pem && ln -fs /usr/share/ca-certificates/16862.pem /etc/ssl/certs/16862.pem"
	I1009 12:42:32.178193    4056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.179558    4056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:49 /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.179596    4056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.181714    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16862.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 12:42:32.184516    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 12:42:32.188090    4056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.189741    4056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.189775    4056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.191665    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 12:42:32.195025    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1686.pem && ln -fs /usr/share/ca-certificates/1686.pem /etc/ssl/certs/1686.pem"
	I1009 12:42:32.198214    4056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.199772    4056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:49 /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.199806    4056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.201842    4056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1686.pem /etc/ssl/certs/51391683.0"
	I1009 12:42:32.204877    4056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 12:42:32.206780    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 12:42:32.208659    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 12:42:32.210615    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 12:42:32.212832    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 12:42:32.215858    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 12:42:32.217931    4056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
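
Each `-checkend 86400` invocation above asks openssl whether the certificate expires within the next 86400 seconds (24 h), which tells minikube which certs to regenerate. The same check in pure Go with crypto/x509 (path from the log; expiresWithin is our helper name):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d — the analogue of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
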
	I1009 12:42:32.219849    4056 kubeadm.go:392] StartCluster: {Name:running-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:32.219928    4056 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.231199    4056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 12:42:32.234976    4056 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 12:42:32.234983    4056 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 12:42:32.235025    4056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 12:42:32.238254    4056 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.239532    4056 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-763000" does not appear in /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:32.239593    4056 kubeconfig.go:62] /Users/jenkins/minikube-integration/19780-1164/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-763000" cluster setting kubeconfig missing "running-upgrade-763000" context setting]
	I1009 12:42:32.239984    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.240755    4056 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c0f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
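
The rest.Config dumped above (reassembled onto one line) is the client-go configuration minikube builds to reach the apiserver: the host URL plus the profile's client cert/key and the cluster CA. A stripped-down sketch constructing a clientset from the same fields, assuming k8s.io/client-go is available:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt",
    		},
    	}
    	// Building the clientset only validates the config; no request is
    	// sent yet, which is why this works while the apiserver is down.
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    	fmt.Println("client config OK for", cfg.Host)
    }
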
	I1009 12:42:32.246001    4056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 12:42:32.249273    4056 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-763000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
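
kubeadm.go:640 decides "config drift" purely from the exit status of `diff -u` on the old and new kubeadm.yaml; here the drift is the cri-dockerd socket gaining its unix:// scheme and cgroupDriver moving from systemd to cgroupfs. A sketch of that exit-status test (local paths, SSH hop omitted):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // configDrifted runs `diff -u` and maps exit status 1 (files differ)
    // to true; status 0 means identical, anything else is a real error.
    func configDrifted(oldPath, newPath string) (bool, error) {
    	cmd := exec.Command("diff", "-u", oldPath, newPath)
    	out, err := cmd.Output()
    	if err == nil {
    		return false, nil
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		fmt.Print(string(out)) // the unified diff, as logged above
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("drifted:", drifted)
    }
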
	I1009 12:42:32.249280    4056 kubeadm.go:1160] stopping kube-system containers ...
	I1009 12:42:32.249336    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.260852    4056 docker.go:483] Stopping containers: [60b710e3ac8d 0fe6dcae56d3 ec3f65181026 5ae86cb0a43f 292e40c297f5 6ad25cea7b79 ef6bd7897f53 ae0a291f9f06 557a401ad1a9 301a37b51d64 6c7a674ad960 120043bae0b5 21acea369545 a29f202107da f9e43d160ee4 a2ea44b2098d]
	I1009 12:42:32.260926    4056 ssh_runner.go:195] Run: docker stop 60b710e3ac8d 0fe6dcae56d3 ec3f65181026 5ae86cb0a43f 292e40c297f5 6ad25cea7b79 ef6bd7897f53 ae0a291f9f06 557a401ad1a9 301a37b51d64 6c7a674ad960 120043bae0b5 21acea369545 a29f202107da f9e43d160ee4 a2ea44b2098d
	I1009 12:42:32.276031    4056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 12:42:32.369076    4056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:42:32.373868    4056 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  9 19:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  9 19:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  9 19:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  9 19:41 /etc/kubernetes/scheduler.conf
	
	I1009 12:42:32.373920    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf
	I1009 12:42:32.377199    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.377237    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:42:32.380821    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf
	I1009 12:42:32.384658    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.384710    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:42:32.388265    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.391893    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.391937    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.395140    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf
	I1009 12:42:32.398271    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.398312    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 12:42:32.401470    4056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:42:32.404829    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:32.430239    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.236856    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.507086    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.531953    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.557030    4056 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:42:33.557126    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.059513    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.559272    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.564181    4056 api_server.go:72] duration metric: took 1.007177833s to wait for apiserver process to appear ...
	I1009 12:42:34.564195    4056 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:42:34.564215    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
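
The wait above has two stages: pgrep until the kube-apiserver process exists (about a second here), then repeated GETs against https://10.0.2.15:8443/healthz until it answers. A minimal polling sketch for the second stage (InsecureSkipVerify stands in for loading the cluster CA; the 4-minute deadline is an arbitrary choice, not minikube's):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for /healthz")
    }
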
	W1009 12:42:32.043151    4045 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:32.043243    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:32.054670    4045 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 12:42:32.054691    4045 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:32.054749    4045 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:32.069807    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 12:42:32.069950    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 12:42:32.071288    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1009 12:42:32.071304    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1009 12:42:32.100076    4045 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 12:42:32.100089    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1009 12:42:32.358408    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 12:42:32.358447    4045 cache_images.go:92] duration metric: took 4.224259625s to LoadCachedImages
	W1009 12:42:32.358485    4045 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1009 12:42:32.358495    4045 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1009 12:42:32.358555    4045 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-220000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 12:42:32.358629    4045 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 12:42:32.372709    4045 cni.go:84] Creating CNI manager for ""
	I1009 12:42:32.372722    4045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:32.372729    4045 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 12:42:32.372737    4045 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-220000 NodeName:stopped-upgrade-220000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 12:42:32.372818    4045 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-220000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 12:42:32.372877    4045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1009 12:42:32.376234    4045 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 12:42:32.376283    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 12:42:32.379575    4045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1009 12:42:32.385288    4045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 12:42:32.390815    4045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1009 12:42:32.397410    4045 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1009 12:42:32.398808    4045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
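
The bash one-liner above is minikube's idempotent /etc/hosts update: drop any existing control-plane.minikube.internal line, append the fresh ip<TAB>host mapping, and copy the temp file back over /etc/hosts. The same filter-and-append expressed in Go (printing the result instead of performing the sudo cp step):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes stale lines ending in <TAB>host and appends
    // ip<TAB>host, mirroring the grep -v / echo pipeline in the log above.
    func ensureHostsEntry(hosts, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
    }

    func main() {
    	data, _ := os.ReadFile("/etc/hosts") // error handling elided in this sketch
    	fmt.Print(ensureHostsEntry(strings.TrimRight(string(data), "\n"),
    		"10.0.2.15", "control-plane.minikube.internal"))
    }
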
	I1009 12:42:32.403081    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:32.486911    4045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:42:32.493449    4045 certs.go:68] Setting up /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000 for IP: 10.0.2.15
	I1009 12:42:32.493457    4045 certs.go:194] generating shared ca certs ...
	I1009 12:42:32.493468    4045 certs.go:226] acquiring lock for ca certs: {Name:mkbf858b3b2074a12d126c3a2fed20f98f420e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.493618    4045 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key
	I1009 12:42:32.493678    4045 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key
	I1009 12:42:32.493685    4045 certs.go:256] generating profile certs ...
	I1009 12:42:32.494530    4045 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key
	I1009 12:42:32.494552    4045 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d
	I1009 12:42:32.494562    4045 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1009 12:42:32.538865    4045 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d ...
	I1009 12:42:32.538884    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d: {Name:mk636c31666e9b6925eca9992cc4574f1553d5aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.539323    4045 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d ...
	I1009 12:42:32.539332    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d: {Name:mkebc0ee7e2a420801c61f60a85aae3f650ed1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.539508    4045 certs.go:381] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt
	I1009 12:42:32.539639    4045 certs.go:385] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key
	I1009 12:42:32.539891    4045 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/proxy-client.key
	I1009 12:42:32.540032    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem (1338 bytes)
	W1009 12:42:32.540058    4045 certs.go:480] ignoring /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686_empty.pem, impossibly tiny 0 bytes
	I1009 12:42:32.540065    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 12:42:32.540087    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem (1078 bytes)
	I1009 12:42:32.540108    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem (1123 bytes)
	I1009 12:42:32.540127    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem (1679 bytes)
	I1009 12:42:32.540168    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:32.540720    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 12:42:32.551874    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 12:42:32.559904    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 12:42:32.567688    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 12:42:32.575516    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 12:42:32.583003    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 12:42:32.590801    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 12:42:32.598147    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 12:42:32.604948    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /usr/share/ca-certificates/16862.pem (1708 bytes)
	I1009 12:42:32.611975    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 12:42:32.619380    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem --> /usr/share/ca-certificates/1686.pem (1338 bytes)
	I1009 12:42:32.626852    4045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 12:42:32.632013    4045 ssh_runner.go:195] Run: openssl version
	I1009 12:42:32.634096    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 12:42:32.637260    4045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.638712    4045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.638740    4045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.640586    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 12:42:32.644240    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1686.pem && ln -fs /usr/share/ca-certificates/1686.pem /etc/ssl/certs/1686.pem"
	I1009 12:42:32.647645    4045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.649299    4045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:49 /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.649330    4045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.651099    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1686.pem /etc/ssl/certs/51391683.0"
	I1009 12:42:32.654172    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16862.pem && ln -fs /usr/share/ca-certificates/16862.pem /etc/ssl/certs/16862.pem"
	I1009 12:42:32.657261    4045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.658632    4045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:49 /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.658655    4045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.660414    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16862.pem /etc/ssl/certs/3ec20f2e.0"
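The symlink names in the three blocks above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each CA file in /etc/ssl/certs is reachable via a link named after the cert's subject hash plus a ".0" suffix, and the hash is exactly what the `openssl x509 -hash -noout` calls print. A minimal Go sketch of that convention, run against a local path rather than through the ssh_runner used here (the helper name is illustrative):

	// subjecthash.go - sketch of the hash-and-symlink step seen above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mimics: openssl x509 -hash -noout -in <cert>
	// followed by: ln -fs <cert> <certDir>/<hash>.0
	func linkBySubjectHash(certPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // -f semantics: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}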
	I1009 12:42:32.663935    4045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 12:42:32.665488    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 12:42:32.667970    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 12:42:32.670111    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 12:42:32.672228    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 12:42:32.674237    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 12:42:32.676095    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
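Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same test can be expressed directly against a PEM file with the standard library; a sketch, with the path as a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within d, mirroring: openssl x509 -checkend <seconds>
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}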
	I1009 12:42:32.678001    4045 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53678 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:32.678087    4045 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.688536    4045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 12:42:32.691976    4045 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 12:42:32.691983    4045 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 12:42:32.692017    4045 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 12:42:32.695570    4045 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.695840    4045 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-220000" does not appear in /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:32.696155    4045 kubeconfig.go:62] /Users/jenkins/minikube-integration/19780-1164/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-220000" cluster setting kubeconfig missing "stopped-upgrade-220000" context setting]
	I1009 12:42:32.696347    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.696783    4045 kapi.go:59] client config for stopped-upgrade-220000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027600f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 12:42:32.697263    4045 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 12:42:32.700043    4045 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-220000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
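The drift check above leans on `diff -u` exit status: 0 means the freshly rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 means drift (reconfigure), anything else is a failure. A sketch of that decision, run locally instead of over ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs diff -u old new and maps the exit status:
	// 0 = identical, 1 = drift detected, other = failure.
	func configDrifted(oldPath, newPath string) (bool, error) {
		cmd := exec.Command("diff", "-u", oldPath, newPath)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return false, nil // exit 0: no drift
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			fmt.Printf("config drift:\n%s", out)
			return true, nil
		}
		return false, err
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		fmt.Println("drifted:", drifted)
	}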
	I1009 12:42:32.700050    4045 kubeadm.go:1160] stopping kube-system containers ...
	I1009 12:42:32.700094    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.710966    4045 docker.go:483] Stopping containers: [fa75835cea07 feeebbcd5fb9 c89b00d98989 90a81eccf4ba 7ab74f2cae22 fa3ceaf6ef5a 2ea2df6dd5b5 1de8f5d61449]
	I1009 12:42:32.711041    4045 ssh_runner.go:195] Run: docker stop fa75835cea07 feeebbcd5fb9 c89b00d98989 90a81eccf4ba 7ab74f2cae22 fa3ceaf6ef5a 2ea2df6dd5b5 1de8f5d61449
	I1009 12:42:32.721745    4045 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 12:42:32.727454    4045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:42:32.730709    4045 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 12:42:32.730715    4045 kubeadm.go:157] found existing configuration files:
	
	I1009 12:42:32.730751    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf
	I1009 12:42:32.733220    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 12:42:32.733252    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:42:32.736103    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf
	I1009 12:42:32.739311    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 12:42:32.739345    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:42:32.742420    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.744900    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 12:42:32.744925    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.747937    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf
	I1009 12:42:32.750981    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 12:42:32.751006    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 12:42:32.753657    4045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:42:32.756391    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:32.778474    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.412317    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.537292    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.560371    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
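Rather than a full `kubeadm init`, the restart path re-runs only the five phases above in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each with PATH pointed at the cached v1.24.1 binaries. A sketch of that sequencing, mirroring the exact command lines from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmdline := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			cmd := exec.Command("/bin/bash", "-c", cmdline)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
				return // stop at the first failed phase
			}
		}
	}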
	I1009 12:42:33.587475    4045 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:42:33.587554    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.090080    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.589593    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:35.089612    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:35.093856    4045 api_server.go:72] duration metric: took 1.506426666s to wait for apiserver process to appear ...
	I1009 12:42:35.093868    4045 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:42:35.093878    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
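From this point the log alternates between "Checking apiserver healthz" and "stopped: ... context deadline exceeded" for two polling processes (4045 and 4056): each GET carries its own deadline, and on timeout the next attempt starts immediately. A minimal sketch of such a loop; the 5s per-request timeout is inferred from the timestamp spacing, not taken from minikube's source, and TLS verification is skipped here only to keep the sketch self-contained (the rest.Config above shows the real client verifies against the cluster CA):

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz polls the apiserver /healthz endpoint until it answers 200
	// or the outer context expires.
	func pollHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second, // per-attempt deadline, inferred from the log spacing
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for ctx.Err() == nil {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // mirrors the "stopped: ..." lines below
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		return ctx.Err()
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		fmt.Println(pollHealthz(ctx, "https://10.0.2.15:8443/healthz"))
	}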
	I1009 12:42:39.566477    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:39.566534    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:40.095831    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:40.095894    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:44.566770    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:44.566826    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:45.096146    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:45.096195    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:49.567145    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:49.567173    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:50.096780    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:50.096812    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:54.567605    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:54.567641    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:55.097647    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:55.097667    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:59.568274    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:59.568327    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:00.098376    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:00.098487    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:04.569367    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:04.569413    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:05.099808    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:05.099895    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:09.571027    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:09.571075    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:10.101117    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:10.101164    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:14.572307    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:14.572352    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:15.102916    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:15.102959    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:19.574593    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:19.574672    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:20.105100    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:20.105120    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:24.577113    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:24.577158    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:25.106452    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:25.106548    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:29.578165    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:29.578215    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:30.107489    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:30.107570    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:34.580397    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:34.580976    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:34.594927    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:34.595029    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:34.607007    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:34.607101    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:34.617912    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:34.617998    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:34.633607    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:34.633690    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:34.644230    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:34.644310    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:34.657150    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:34.657234    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:34.667357    4056 logs.go:282] 0 containers: []
	W1009 12:43:34.667368    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:34.667427    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:34.685594    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:34.685612    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:34.685617    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:34.700268    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:34.700280    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:34.714601    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:34.714612    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:34.741317    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:34.741328    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:34.754959    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:34.754970    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:34.870393    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:34.870403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:34.882246    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:34.882256    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:34.896061    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:34.896074    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:34.913726    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:34.913738    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:34.921615    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:34.921625    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:34.937497    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:34.937508    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:34.949470    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:34.949480    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:34.993492    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:34.993504    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:35.004903    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:35.004913    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:35.022848    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:35.022859    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:35.039533    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:35.039543    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:35.054606    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:35.054618    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:35.067289    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:35.067303    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
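Each time the healthz wait gives up, the same diagnostics pass repeats: list container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then pull `docker logs --tail 400` for every hit. A sketch of that enumeration, assuming a local docker CLI rather than the remote ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] (%d bytes of logs)\n", c, id, len(logs))
			}
		}
	}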
	I1009 12:43:35.109896    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:35.109991    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:35.120754    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:35.120842    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:35.131533    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:35.131610    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:35.141808    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:35.141883    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:35.152485    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:35.152578    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:35.162770    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:35.162851    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:35.174137    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:35.174219    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:35.190665    4045 logs.go:282] 0 containers: []
	W1009 12:43:35.190679    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:35.190747    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:35.200999    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:35.201016    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:35.201021    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:35.227912    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:35.227921    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:35.232401    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:35.232408    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:35.246677    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:35.246690    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:35.276855    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:35.276865    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:35.289077    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:35.289089    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:35.305415    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:35.305426    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:35.323538    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:35.323548    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:35.417702    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:35.417712    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:35.432900    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:35.432911    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:35.450689    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:35.450699    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:35.476382    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:35.476394    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:35.489766    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:35.489778    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:35.530384    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:35.530396    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:43:35.547946    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:35.547956    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:35.559696    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:35.559709    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:37.580336    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:38.073588    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:42.580693    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:42.580964    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:42.601403    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:42.601513    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:42.619702    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:42.619788    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:42.631540    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:42.631622    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:42.643493    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:42.643582    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:42.654290    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:42.654369    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:42.665186    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:42.665262    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:42.674854    4056 logs.go:282] 0 containers: []
	W1009 12:43:42.674867    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:42.674933    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:42.689585    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:42.689599    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:42.689604    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:42.694635    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:42.694642    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:42.720230    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:42.720237    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:42.744137    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:42.744147    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:42.755519    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:42.755530    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:42.770339    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:42.770353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:43:42.781726    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:42.781737    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:42.793555    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:42.793570    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:42.805819    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:42.805828    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:42.843184    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:42.843195    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:42.857872    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:42.857884    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:42.869340    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:42.869353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:42.883038    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:42.883048    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:42.895039    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:42.895052    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:42.912461    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:42.912471    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:42.952467    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:42.952474    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:42.967261    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:42.967272    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:42.984713    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:42.984728    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:43.075939    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:43.076101    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:43.089317    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:43.089416    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:43.101036    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:43.101108    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:43.111776    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:43.111860    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:43.122163    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:43.122253    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:43.132783    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:43.132863    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:43.143175    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:43.143245    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:43.153404    4045 logs.go:282] 0 containers: []
	W1009 12:43:43.153417    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:43.153487    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:43.164665    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:43.164683    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:43.164689    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:43.200746    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:43.200761    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:43.225556    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:43.225566    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:43.239905    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:43.239919    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:43.251947    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:43.251957    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:43.265986    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:43.265996    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:43.277682    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:43.277697    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:43:43.289226    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:43.289236    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:43.302283    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:43.302295    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:43.325989    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:43.325997    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:43.362758    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:43.362765    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:43.366590    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:43.366599    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:43.384168    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:43.384182    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:43.400799    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:43.400809    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:43.411922    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:43.411934    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:43.426908    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:43.426919    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:45.939937    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:45.498248    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:50.942112    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:50.942266    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:50.952626    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:50.952697    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:50.970200    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:50.970290    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:50.980737    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:50.980810    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:50.991873    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:50.991951    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:51.002145    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:51.002221    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:51.012482    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:51.012559    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:51.022703    4045 logs.go:282] 0 containers: []
	W1009 12:43:51.022713    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:51.022775    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:51.036349    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:51.036367    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:51.036373    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:51.050356    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:51.050368    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:51.064926    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:51.064936    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:51.078365    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:51.078376    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:51.092770    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:51.092780    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:51.109984    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:51.109995    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:51.121461    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:51.121476    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:51.133296    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:51.133317    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:51.145755    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:51.145766    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:51.170930    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:51.170940    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:51.209759    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:51.209769    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:43:51.222431    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:51.222441    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:51.239728    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:51.239738    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:51.263645    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:51.263653    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:51.267959    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:51.267964    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:51.279748    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:51.279761    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:50.500490    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:50.500821    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:50.527168    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:50.527315    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:50.544597    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:50.544686    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:50.557820    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:50.557913    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:50.575772    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:50.575858    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:50.586754    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:50.586826    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:50.597374    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:50.597470    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:50.607887    4056 logs.go:282] 0 containers: []
	W1009 12:43:50.607897    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:50.607963    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:50.618747    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:43:50.618772    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:50.618778    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:50.633149    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:50.633158    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:50.645666    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:50.645678    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:50.657290    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:50.657303    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:50.674982    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:50.674993    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:50.713431    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:50.713447    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:50.735455    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:50.735467    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:43:50.746335    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:50.746346    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:50.764019    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:50.764033    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:50.779996    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:50.780009    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:50.791702    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:50.791712    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:50.795720    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:50.795729    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:50.821661    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:50.821671    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:50.832921    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:50.832940    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:50.844168    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:50.844180    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:50.858346    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:50.858359    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:50.870462    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:50.870473    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:50.886110    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:50.886121    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:53.431077    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:53.821440    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:58.433730    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:58.434306    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:58.485629    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:43:58.485764    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:58.515523    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:43:58.515616    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:58.528918    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:43:58.529001    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:58.539292    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:43:58.539374    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:58.550121    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:43:58.550206    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:58.561028    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:43:58.561113    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:58.571619    4056 logs.go:282] 0 containers: []
	W1009 12:43:58.571629    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:58.571700    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:58.582812    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
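[editor's note] The run of docker ps -a --filter=name=k8s_<component> commands that just completed is the discovery step: before dumping logs, the collector resolves which container IDs belong to each control-plane component under the kubeadm naming convention (k8s_<component>_...). A hedged sketch of that step, assuming a hypothetical helper containerIDs and the same component list the log enumerates:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (running or exited) whose name
    // matches the kubeadm convention k8s_<component>_...
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // e.g. "2 containers: [...]"
        }
    }

Two IDs per component (as above for kube-apiserver, etcd, etc.) indicate an exited instance plus its restarted replacement; the empty kindnet result simply means no CNI container of that name exists in this configuration.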
	I1009 12:43:58.582830    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:43:58.582837    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:43:58.594653    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:43:58.594666    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:43:58.606835    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:58.606847    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:58.632931    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:43:58.632946    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:43:58.644674    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:58.644686    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:58.649081    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:43:58.649087    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:43:58.660219    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:43:58.660229    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:43:58.677898    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:43:58.677909    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:43:58.692678    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:58.692688    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:58.734850    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:43:58.734857    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:43:58.748107    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:43:58.748117    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:43:58.763761    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:43:58.763771    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:43:58.781628    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:43:58.781636    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:58.793577    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:58.793593    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:58.837020    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:43:58.837032    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:43:58.850774    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:43:58.850784    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:43:58.867218    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:43:58.867235    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:43:58.880026    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:43:58.880042    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:43:58.823513    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:58.823638    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:58.835283    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:58.835373    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:58.846672    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:58.846755    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:58.859854    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:58.859934    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:58.871710    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:58.871808    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:58.883484    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:58.883562    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:58.895613    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:58.895690    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:58.909613    4045 logs.go:282] 0 containers: []
	W1009 12:43:58.909626    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:58.909700    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:58.919445    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:58.919473    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:58.919479    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:58.958387    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:58.958403    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:58.994228    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:58.994238    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:59.008464    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:59.008475    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:59.023915    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:59.023927    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:59.046492    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:59.046510    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:59.061893    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:59.061903    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:59.073510    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:59.073522    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:59.085311    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:59.085323    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:59.114899    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:59.114911    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:59.128681    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:59.128692    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:59.147405    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:59.147415    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:59.173417    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:59.173425    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:59.177361    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:59.177369    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:59.188992    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:59.189002    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:59.200483    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:59.200496    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
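[editor's note] Each discovery pass is followed by the same gathering fan-out recorded above: docker logs --tail 400 per container ID, journalctl for the kubelet and Docker/cri-docker units, a filtered dmesg, kubectl describe nodes, and a container-status listing. The command strings below are copied from the log; the run wrapper is an illustrative stand-in, not minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one gathering command through bash, as the
    // ssh_runner lines above do on the guest VM.
    func run(cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s ==\n%s", cmd, out)
        if err != nil {
            fmt.Println("error:", err)
        }
    }

    func main() {
        // per-container logs, capped at the last 400 lines
        for _, id := range []string{"997d69e6cf17", "7ab74f2cae22"} { // sample IDs from the log
            run("docker logs --tail 400 " + id)
        }
        // system-level sources gathered in the same pass
        run(`sudo journalctl -u kubelet -n 400`)
        run(`sudo journalctl -u docker -u cri-docker -n 400`)
        run(`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
        run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }

The healthz check, discovery, and gathering steps then repeat as a cycle for both pids (4045 and 4056) for the remainder of this section, with their output interleaved because the two processes log concurrently.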
	I1009 12:44:01.398674    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:01.713396    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:06.401479    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:06.402026    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:06.439615    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:06.439776    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:06.461533    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:06.461670    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:06.484338    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:06.484423    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:06.498574    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:06.498658    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:06.509381    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:06.509454    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:06.520139    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:06.520221    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:06.530627    4056 logs.go:282] 0 containers: []
	W1009 12:44:06.530644    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:06.530716    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:06.541410    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:06.541430    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:06.541435    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:06.553799    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:06.553813    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:06.571986    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:06.571997    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:06.615058    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:06.615069    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:06.619396    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:06.619403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:06.633640    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:06.633651    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:06.646049    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:06.646063    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:06.662695    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:06.662705    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:06.675479    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:06.675491    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:06.700443    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:06.700453    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:06.716011    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:06.716021    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:06.727742    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:06.727753    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:06.739833    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:06.739846    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:06.767096    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:06.767106    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:06.780675    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:06.780685    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:06.820246    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:06.820260    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:06.835715    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:06.835726    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:06.850804    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:06.850814    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:09.365332    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:06.715321    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:06.715429    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:06.732907    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:06.732992    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:06.744191    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:06.744270    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:06.755664    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:06.755742    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:06.767078    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:06.767168    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:06.779185    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:06.779268    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:06.791082    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:06.791160    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:06.802129    4045 logs.go:282] 0 containers: []
	W1009 12:44:06.802143    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:06.802211    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:06.813302    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:06.813322    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:06.813327    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:06.854832    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:06.854849    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:06.882171    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:06.882183    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:06.896555    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:06.896565    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:06.907978    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:06.907987    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:06.931469    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:06.931475    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:06.943219    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:06.943228    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:06.978241    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:06.978256    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:06.992402    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:06.992416    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:07.004346    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:07.004357    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:07.018356    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:07.018366    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:07.029504    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:07.029514    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:07.033761    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:07.033768    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:07.047241    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:07.047252    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:07.058584    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:07.058596    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:07.073546    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:07.073556    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:09.593294    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:14.367722    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:14.368281    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:14.407904    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:14.408066    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:14.430174    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:14.430309    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:14.445791    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:14.445879    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:14.458139    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:14.458227    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:14.471519    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:14.472536    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:14.483706    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:14.483789    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:14.493873    4056 logs.go:282] 0 containers: []
	W1009 12:44:14.493884    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:14.493946    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:14.511461    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:14.511476    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:14.511483    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:14.516126    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:14.516134    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:14.530527    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:14.530541    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:14.541996    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:14.542009    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:14.557072    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:14.557083    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:14.569235    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:14.569247    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:14.588271    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:14.588283    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:14.601425    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:14.601438    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:14.614220    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:14.614236    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:14.627506    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:14.627519    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:14.640186    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:14.640198    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:14.680800    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:14.680816    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:14.701170    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:14.701186    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:14.720251    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:14.720259    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:14.747802    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:14.747813    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:14.792980    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:14.792991    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:14.808559    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:14.808576    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:14.820953    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:14.820966    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:14.593765    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:14.593862    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:14.605813    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:14.605896    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:14.618313    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:14.618393    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:14.637433    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:14.637520    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:14.649104    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:14.649186    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:14.660108    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:14.660190    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:14.671544    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:14.671629    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:14.682727    4045 logs.go:282] 0 containers: []
	W1009 12:44:14.682738    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:14.682811    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:14.706684    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:14.706702    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:14.706708    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:14.719663    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:14.719676    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:14.760556    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:14.760566    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:14.775382    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:14.775394    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:14.791079    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:14.791090    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:14.803321    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:14.803335    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:14.815901    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:14.815913    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:14.849713    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:14.849723    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:14.866925    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:14.866937    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:14.880570    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:14.880580    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:14.904999    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:14.905006    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:14.918988    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:14.918999    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:14.930831    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:14.930841    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:14.935597    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:14.935604    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:14.970196    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:14.970206    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:14.985661    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:14.985671    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:17.346845    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:17.498024    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:22.349525    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:22.350172    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:22.391755    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:22.391922    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:22.413727    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:22.413845    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:22.429191    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:22.429282    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:22.441653    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:22.441744    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:22.452474    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:22.452546    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:22.464087    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:22.464170    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:22.475689    4056 logs.go:282] 0 containers: []
	W1009 12:44:22.475703    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:22.475774    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:22.486744    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:22.486758    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:22.486763    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:22.528450    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:22.528464    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:22.570034    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:22.570045    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:22.585794    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:22.585810    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:22.599173    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:22.599185    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:22.613842    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:22.613857    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:22.629475    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:22.629487    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:22.634684    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:22.634696    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:22.646767    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:22.646779    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:22.663098    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:22.663109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:22.676726    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:22.676737    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:22.689302    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:22.689316    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:22.709814    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:22.709829    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:22.737273    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:22.737285    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:22.767244    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:22.767260    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:22.790049    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:22.790064    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:22.813193    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:22.813201    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:22.827878    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:22.827890    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:25.343272    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:22.499575    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:22.499688    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:22.510652    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:22.510734    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:22.521626    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:22.521704    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:22.532142    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:22.532230    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:22.543395    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:22.543476    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:22.555058    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:22.555145    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:22.566782    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:22.566867    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:22.577408    4045 logs.go:282] 0 containers: []
	W1009 12:44:22.577422    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:22.577496    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:22.592480    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:22.592501    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:22.592508    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:22.619369    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:22.619386    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:22.631728    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:22.631739    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:22.644784    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:22.644796    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:22.666979    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:22.666989    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:22.685482    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:22.685495    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:22.701417    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:22.701429    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:22.742815    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:22.742830    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:22.756356    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:22.756369    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:22.780856    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:22.780875    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:22.793101    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:22.793111    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:22.805598    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:22.805609    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:22.810221    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:22.810230    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:22.848203    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:22.848218    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:22.863160    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:22.863171    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:22.877588    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:22.877599    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:25.394240    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:30.344292    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:30.344873    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:30.385018    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:30.385186    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:30.406185    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:30.406296    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:30.422341    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:30.422429    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:30.396584    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:30.396766    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:30.414415    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:30.414510    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:30.429458    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:30.429545    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:30.441686    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:30.441771    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:30.456780    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:30.456859    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:30.470400    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:30.470477    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:30.482614    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:30.482694    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:30.494546    4045 logs.go:282] 0 containers: []
	W1009 12:44:30.494561    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:30.494635    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:30.506137    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:30.506154    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:30.506159    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:30.522013    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:30.522026    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:30.534791    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:30.534803    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:30.546940    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:30.546951    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:30.572951    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:30.572965    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:30.614376    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:30.614390    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:30.652105    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:30.652116    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:30.679192    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:30.679203    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:30.700849    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:30.700864    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:30.713389    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:30.713401    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:30.730821    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:30.730837    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:30.745429    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:30.745443    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:30.758223    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:30.758237    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:30.763268    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:30.763276    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:30.778206    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:30.778216    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:30.808418    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:30.808434    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:30.435623    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:30.435701    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:30.448218    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:30.448302    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:30.459980    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:30.460086    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:30.473364    4056 logs.go:282] 0 containers: []
	W1009 12:44:30.473375    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:30.473445    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:30.485296    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:30.485314    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:30.485319    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:30.498397    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:30.498408    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:30.513206    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:30.513217    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:30.528474    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:30.528489    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:30.541960    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:30.541971    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:30.554380    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:30.554391    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:30.566758    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:30.566770    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:30.579012    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:30.579024    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:30.591659    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:30.591672    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:30.603774    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:30.603785    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:30.625974    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:30.625987    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:30.644710    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:30.644724    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:30.672151    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:30.672163    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:30.677301    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:30.677309    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:30.721133    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:30.721148    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:30.734477    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:30.734489    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:30.750691    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:30.750705    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:30.795794    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:30.795815    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:33.322816    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:33.323006    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:38.325344    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:38.325528    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:38.340535    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:38.340581    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:38.356589    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:38.356681    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:38.369347    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:38.369387    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:38.380692    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:38.380774    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:38.392440    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:38.392484    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:38.408100    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:38.408184    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:38.418843    4056 logs.go:282] 0 containers: []
	W1009 12:44:38.418857    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:38.418931    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:38.429925    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:38.429940    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:38.429949    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:38.446321    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:38.446330    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:38.464755    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:38.464766    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:38.476954    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:38.476967    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:38.522357    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:38.522366    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:38.527455    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:38.527462    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:38.546502    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:38.546513    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:38.558220    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:38.558230    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:38.573358    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:38.573374    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:38.585915    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:38.585927    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:38.604118    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:38.604131    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:38.619646    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:38.619655    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:38.632881    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:38.632890    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:38.645822    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:38.645834    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:38.672662    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:38.672686    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:38.710921    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:38.710934    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:38.725990    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:38.726005    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:38.739052    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:38.739064    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
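The two PIDs above (4056 and 4045) are separate minikube processes stuck in the same wait loop: poll the apiserver's /healthz endpoint, hit the client timeout (the "stopped: ... context deadline exceeded" lines), collect component logs, and retry. A minimal Go sketch of that poll-and-retry pattern, using the 10.0.2.15:8443 endpoint from the log; the 5 s timeout, 3 s pause, and skipped TLS verification are illustrative assumptions, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // produces Client.Timeout errors like those logged above
		Transport: &http.Transport{
			// The guest apiserver serves a self-signed cert; verification is
			// skipped here purely for the sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("stopped: %v\n", err)
			// ... a real loop would gather component logs here, as the log shows ...
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(3 * time.Second)
	}
}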
	I1009 12:44:38.325307    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:38.325528    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:38.340226    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:38.340316    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:38.357540    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:38.357586    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:38.369136    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:38.369236    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:38.381043    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:38.381090    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:38.392244    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:38.392330    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:38.403136    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:38.403219    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:38.414739    4045 logs.go:282] 0 containers: []
	W1009 12:44:38.414749    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:38.414816    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:38.430083    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:38.430096    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:38.430100    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:38.445561    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:38.445573    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:38.460842    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:38.460854    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:38.479546    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:38.479557    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:38.502447    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:38.502460    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:38.527852    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:38.527867    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:38.568687    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:38.568703    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:38.601781    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:38.601797    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:38.618544    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:38.618557    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:38.632059    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:38.632071    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:38.669011    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:38.669022    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:38.683525    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:38.683538    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:38.687932    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:38.687942    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:38.702226    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:38.702237    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:38.714553    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:38.714565    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:38.728522    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:38.728540    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:41.243425    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:41.259188    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:46.246024    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:46.246498    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:46.279564    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:46.279706    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:46.299076    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:46.299259    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:46.314895    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:46.314995    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:46.336659    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:46.336730    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:46.349514    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:46.349591    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:46.365929    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:46.366007    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:46.377830    4045 logs.go:282] 0 containers: []
	W1009 12:44:46.377840    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:46.377907    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:46.389858    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:46.389876    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:46.389882    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:46.429634    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:46.429644    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:46.434709    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:46.434721    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:46.450312    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:46.450326    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:46.466960    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:46.466978    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:46.488757    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:46.488766    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:46.529641    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:46.529651    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:46.544050    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:46.544064    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:46.558156    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:46.558169    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:46.570035    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:46.570046    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:46.582695    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:46.582707    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:46.595695    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:46.595708    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:46.622282    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:46.622294    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:46.649224    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:46.649241    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:46.662245    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:46.662257    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:46.680770    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:46.680784    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:46.261557    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:46.261856    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:46.287063    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:46.287175    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:46.306528    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:46.306617    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:46.319878    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:46.319957    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:46.331796    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:46.331872    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:46.343595    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:46.343668    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:46.355211    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:46.355290    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:46.366577    4056 logs.go:282] 0 containers: []
	W1009 12:44:46.366586    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:46.366625    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:46.377714    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:46.377729    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:46.377736    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:46.416501    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:46.416515    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:46.428150    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:46.428162    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:46.442175    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:46.442192    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:46.457311    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:46.457327    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:46.471128    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:46.471143    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:46.488519    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:46.488531    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:46.500579    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:46.500591    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:46.512845    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:46.512857    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:46.540004    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:46.540019    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:46.585331    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:46.585342    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:46.598371    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:46.598381    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:46.603108    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:46.603118    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:46.617866    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:46.617877    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:46.634723    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:46.634736    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:46.650401    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:46.650409    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:46.669222    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:46.669235    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:46.687864    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:46.687876    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
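Each gathering pass begins by enumerating containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is where the "N containers: [...]" lines (logs.go:282) come from. A sketch of that enumeration, assuming a local docker daemon instead of the ssh_runner hop used in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the filters logged above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}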
	I1009 12:44:49.203512    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:49.195702    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:54.205619    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:54.205722    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:54.218746    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:44:54.218838    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:54.230548    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:44:54.230630    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:54.242163    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:44:54.242247    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:54.253725    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:44:54.253810    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:54.265239    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:44:54.265317    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:54.277319    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:44:54.277407    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:54.288393    4056 logs.go:282] 0 containers: []
	W1009 12:44:54.288406    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:54.288475    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:54.300362    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:44:54.300379    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:54.300385    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:54.305274    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:44:54.305285    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:44:54.317943    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:44:54.317955    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:44:54.334018    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:44:54.334033    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:44:54.345679    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:54.345694    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:54.372119    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:44:54.372130    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:54.384572    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:44:54.384584    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:44:54.396865    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:44:54.396877    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:44:54.414782    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:54.414794    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:54.460806    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:54.460817    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:54.500342    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:44:54.500356    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:44:54.514895    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:44:54.514906    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:44:54.532595    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:44:54.532603    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:44:54.546644    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:44:54.546652    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:44:54.562833    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:44:54.562843    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:44:54.574723    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:44:54.574737    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:44:54.589342    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:44:54.589354    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:44:54.601953    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:44:54.601965    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:44:54.197987    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:54.198194    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:54.211598    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:54.211695    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:54.223331    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:54.223412    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:54.234655    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:54.234729    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:54.245735    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:54.245812    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:54.256745    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:54.256821    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:54.268172    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:54.268250    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:54.284777    4045 logs.go:282] 0 containers: []
	W1009 12:44:54.284789    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:54.284861    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:54.296062    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:54.296080    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:54.296087    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:54.322638    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:54.322654    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:54.348163    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:54.348177    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:54.389235    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:54.389247    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:54.428039    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:54.428051    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:54.440713    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:54.440725    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:54.459504    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:54.459515    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:54.476804    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:54.476817    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:54.489429    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:54.489442    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:54.502571    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:54.502581    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:54.515416    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:54.515429    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:54.530393    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:54.530403    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:54.545748    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:54.545762    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:54.562600    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:54.562612    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:54.575573    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:54.575582    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:54.580373    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:54.580384    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
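For every container ID found, the loop then tails the last 400 log lines, matching the repeated docker logs --tail 400 <id> commands above. A local sketch of that per-container collection step; the IDs are copied from the enumeration in this log and are only placeholders:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// IDs from the log above; substitute any container IDs present locally.
	for _, id := range []string{"997d69e6cf17", "8f9e9d90d51f"} {
		// The log shells through /bin/bash -c; a direct exec is equivalent here.
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", id, err)
			continue
		}
		fmt.Print(string(out))
	}
}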
	I1009 12:44:57.122393    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:57.097817    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:02.124408    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:02.124569    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:02.137132    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:02.137237    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:02.150795    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:02.150836    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:02.162220    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:02.162269    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:02.174182    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:02.174224    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:02.185590    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:02.185669    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:02.197835    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:02.197910    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:02.209210    4056 logs.go:282] 0 containers: []
	W1009 12:45:02.209223    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:02.209286    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:02.220722    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:02.220737    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:02.220743    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:02.240255    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:02.240263    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:02.252651    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:02.252665    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:02.277636    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:02.277654    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:02.290781    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:02.290793    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:02.296108    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:02.296117    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:02.307780    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:02.307788    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:02.323017    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:02.323028    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:02.336353    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:02.336365    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:02.352729    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:02.352741    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:02.370914    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:02.370926    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:02.383386    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:02.383397    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:02.401353    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:02.401364    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:02.451057    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:02.451071    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:02.493275    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:02.493288    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:02.506110    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:02.506121    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:02.520292    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:02.520303    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:02.532724    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:02.532734    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:05.046470    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:02.099391    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:02.099601    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:02.114654    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:02.114748    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:02.127506    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:02.127583    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:02.139143    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:02.139211    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:02.150439    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:02.150512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:02.161939    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:02.162017    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:02.174058    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:02.174136    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:02.186054    4045 logs.go:282] 0 containers: []
	W1009 12:45:02.186061    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:02.186092    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:02.197521    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:02.197538    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:02.197543    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:02.212844    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:02.212854    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:02.240134    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:02.240159    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:02.254915    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:02.254925    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:02.268384    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:02.268398    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:02.280559    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:02.280569    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:02.306688    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:02.306707    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:02.319849    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:02.319867    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:02.333165    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:02.333179    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:02.347669    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:02.347681    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:02.360132    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:02.360149    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:02.401622    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:02.401631    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:02.406608    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:02.406616    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:02.441982    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:02.441993    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:02.460971    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:02.460983    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:02.486296    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:02.486313    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:05.009541    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
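The "describe nodes" step invokes the version-matched kubectl binary that minikube stages inside the guest (/var/lib/minikube/binaries/v1.24.1/kubectl in this run) against the guest kubeconfig. A local sketch, assuming those exact paths exist and passwordless sudo is available:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths copied verbatim from the log; they exist only inside the minikube guest.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}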
	I1009 12:45:10.048892    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:10.049142    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:10.071132    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:10.071245    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:10.088655    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:10.088752    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:10.102960    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:10.103042    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:10.114666    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:10.114750    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:10.127164    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:10.127245    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:10.143018    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:10.143118    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:10.155428    4056 logs.go:282] 0 containers: []
	W1009 12:45:10.155442    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:10.155517    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:10.166714    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:10.166729    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:10.166735    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:10.171730    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:10.171741    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:10.188872    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:10.188881    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:10.208564    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:10.208574    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:10.221047    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:10.221059    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:10.239946    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:10.239964    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:10.253108    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:10.253119    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:10.273926    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:10.273936    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:10.319568    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:10.319589    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:10.356728    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:10.356740    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:10.371497    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:10.371505    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:10.386992    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:10.387000    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:10.398744    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:10.398756    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:10.412045    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:10.412057    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:10.430220    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:10.430231    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:10.011963    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:10.012579    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:10.052083    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:10.052225    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:10.074651    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:10.074775    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:10.091373    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:10.091482    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:10.108462    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:10.108543    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:10.122028    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:10.122106    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:10.136621    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:10.136701    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:10.147949    4045 logs.go:282] 0 containers: []
	W1009 12:45:10.147959    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:10.148023    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:10.172379    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:10.172394    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:10.172399    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:10.188412    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:10.188424    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:10.203001    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:10.203014    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:10.215834    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:10.215846    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:10.231001    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:10.231017    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:10.246184    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:10.246198    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:10.271973    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:10.271991    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:10.276938    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:10.276947    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:10.293284    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:10.293296    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:10.331461    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:10.331479    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:10.369646    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:10.369658    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:10.385213    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:10.385223    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:10.404880    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:10.404893    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:10.422621    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:10.422633    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:10.441340    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:10.441358    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:10.454140    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:10.454152    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:10.449980    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:10.449998    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:10.468903    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:10.468917    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:10.486627    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:10.486638    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
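Host-level logs come from systemd: journalctl for the kubelet unit and for the docker/cri-docker units, plus a severity-filtered dmesg. A sketch of the journalctl half, assuming a systemd host and passwordless sudo; it omits the dmesg pipeline shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The two journalctl invocations mirrored from the log above.
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
	}
	for _, args := range cmds {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v: %v\n", args, err)
			continue
		}
		fmt.Print(string(out))
	}
}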
	I1009 12:45:13.012618    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:12.996222    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:18.015088    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:18.015407    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:18.042172    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:18.042282    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:18.060575    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:18.060663    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:18.074367    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:18.074477    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:18.087271    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:18.087358    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:18.099017    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:18.099102    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:18.111189    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:18.111277    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:18.122281    4056 logs.go:282] 0 containers: []
	W1009 12:45:18.122315    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:18.122398    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:18.135171    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:18.135188    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:18.135195    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:18.141540    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:18.141551    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:18.161971    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:18.161983    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:18.186843    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:18.186853    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:18.233709    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:18.233723    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:18.272977    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:18.272993    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:18.285926    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:18.285939    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:18.301929    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:18.301943    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:18.321549    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:18.321562    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:18.336850    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:18.336860    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:18.354733    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:18.354746    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:18.373220    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:18.373231    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:18.385781    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:18.385792    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:18.397786    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:18.397798    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:18.413203    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:18.413220    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:18.429385    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:18.429396    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:18.441739    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:18.441750    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:18.454317    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:18.454331    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:17.998874    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:17.999390    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:18.036246    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:18.036386    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:18.056664    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:18.056762    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:18.075089    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:18.075140    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:18.089339    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:18.089411    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:18.101199    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:18.101261    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:18.113164    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:18.113238    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:18.125265    4045 logs.go:282] 0 containers: []
	W1009 12:45:18.125275    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:18.125338    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:18.137332    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:18.137349    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:18.137356    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:18.182268    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:18.182281    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:18.186892    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:18.186897    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:18.202866    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:18.202879    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:18.215768    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:18.215781    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:18.231938    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:18.231948    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:18.244459    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:18.244475    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:18.263043    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:18.263056    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:18.276722    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:18.276733    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:18.291398    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:18.291414    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:18.318537    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:18.318551    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:18.333791    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:18.333806    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:18.346233    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:18.346245    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:18.360808    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:18.360821    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:18.421517    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:18.421532    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:18.447447    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:18.447461    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:20.962796    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:20.969557    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:25.965495    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:25.965950    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:25.999429    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:25.999563    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:26.019385    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:26.019492    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:26.034577    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:26.034662    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:26.047624    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:26.047713    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:26.063992    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:26.064069    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:26.081209    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:26.081287    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:26.092778    4045 logs.go:282] 0 containers: []
	W1009 12:45:26.092791    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:26.092860    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:26.108195    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:26.108211    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:26.108217    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:26.150471    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:26.150490    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:26.163655    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:26.163667    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:26.187960    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:26.187973    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:26.192556    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:26.192566    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:26.210334    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:26.210343    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:26.225159    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:26.225167    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:26.241823    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:26.241838    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:26.270605    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:26.270614    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:26.283042    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:26.283055    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:26.321595    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:26.321607    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:26.336829    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:26.336838    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:26.351343    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:26.351355    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:26.377148    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:26.377165    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:26.388959    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:26.388970    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:26.401623    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:26.401635    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
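	(Note: the interleaved api_server.go lines show the other half of each cycle: both processes poll https://10.0.2.15:8443/healthz, and every attempt here fails after roughly five seconds with "context deadline exceeded", which is what triggers the next round of log gathering. The following is a minimal Go sketch of such a poll loop under stated assumptions, not minikube's actual implementation: the 5 s client timeout is inferred from the timestamp gaps above, the retry pause is a guess, and certificate verification is skipped because the sketch does not load the cluster CA.)

	```go
	// Sketch of the healthz poll visible in the api_server.go lines above.
	// Assumptions: 5 s per-attempt timeout, 2 s retry pause, TLS verification
	// disabled for brevity (minikube's real check trusts the cluster CA).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // inferred from the ~5 s attempt spacing
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				// The failure mode recorded above: the request never completes,
				// so the caller falls back to gathering diagnostics.
				fmt.Println("stopped:", err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.Status)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for apiserver")
	}
	```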
	I1009 12:45:25.971871    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:25.972121    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:26.002832    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:26.002954    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:26.022814    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:26.022925    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:26.037506    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:26.037583    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:26.049704    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:26.049785    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:26.061227    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:26.061309    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:26.077110    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:26.077202    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:26.088191    4056 logs.go:282] 0 containers: []
	W1009 12:45:26.088203    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:26.088269    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:26.101576    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:26.101626    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:26.101636    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:26.114182    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:26.114193    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:26.126876    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:26.126889    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:26.143294    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:26.143308    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:26.156130    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:26.156142    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:26.181983    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:26.181993    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:26.195418    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:26.195429    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:26.208342    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:26.208353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:26.223142    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:26.223152    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:26.268120    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:26.268132    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:26.306471    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:26.306483    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:26.322225    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:26.322236    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:26.335440    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:26.335452    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:26.353723    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:26.353733    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:26.373065    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:26.373079    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:26.389004    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:26.389014    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:26.401644    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:26.401652    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:26.406849    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:26.406860    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:28.924189    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:28.916735    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:33.926439    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:33.926546    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:33.941141    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:33.941227    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:33.955926    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:33.955996    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:33.969568    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:33.969651    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:33.981131    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:33.981221    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:33.997239    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:33.997301    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:34.008255    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:34.008336    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:34.020687    4056 logs.go:282] 0 containers: []
	W1009 12:45:34.020695    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:34.020764    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:34.031858    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:34.031874    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:34.031881    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:34.036291    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:34.036299    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:34.048634    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:34.048646    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:34.068429    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:34.068444    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:34.085246    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:34.085257    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:34.104223    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:34.104236    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:34.116580    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:34.116594    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:34.130375    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:34.130389    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:34.169685    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:34.169697    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:34.184416    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:34.184425    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:34.196745    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:34.196757    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:34.225953    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:34.225963    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:34.245632    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:34.245641    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:34.257859    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:34.257870    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:34.282817    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:34.282828    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:34.326252    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:34.326267    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:34.341242    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:34.341255    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:34.353035    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:34.353050    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:33.919178    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:33.919316    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:33.931145    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:33.931245    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:33.942816    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:33.942876    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:33.954144    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:33.954229    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:33.970997    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:33.971058    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:33.982951    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:33.983014    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:33.994186    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:33.994268    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:34.004773    4045 logs.go:282] 0 containers: []
	W1009 12:45:34.004786    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:34.004860    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:34.019518    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:34.019534    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:34.019541    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:34.061407    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:34.061419    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:34.076352    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:34.076371    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:34.091855    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:34.091872    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:34.108543    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:34.108555    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:34.122343    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:34.122356    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:34.142550    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:34.142563    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:34.147120    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:34.147128    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:34.162985    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:34.163000    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:34.182355    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:34.182368    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:34.205513    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:34.205527    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:34.221944    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:34.221956    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:34.246346    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:34.246356    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:34.288282    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:34.288292    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:34.300181    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:34.300194    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:34.313441    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:34.313452    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:36.872662    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:36.841002    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:41.875050    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:41.875156    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:41.887279    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:41.887369    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:41.898672    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:41.898765    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:41.912037    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:41.912117    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:41.922945    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:41.922987    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:41.934426    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:41.934461    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:41.945379    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:41.945462    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:41.956984    4056 logs.go:282] 0 containers: []
	W1009 12:45:41.956997    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:41.957071    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:41.968458    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:41.968476    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:41.968482    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:41.984083    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:41.984094    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:41.996697    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:41.996708    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:42.009269    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:42.009280    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:42.056231    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:42.056243    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:42.096504    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:42.096517    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:42.108423    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:42.108438    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:42.128611    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:42.128624    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:42.141325    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:42.141338    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:42.156816    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:42.156827    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:42.172251    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:42.172267    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:42.196536    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:42.196547    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:42.209667    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:42.209678    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:42.214741    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:42.214751    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:42.231587    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:42.231599    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:42.243805    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:42.243816    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:42.262082    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:42.262093    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:42.273936    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:42.273947    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:44.787640    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:41.843494    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:41.843680    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:41.855978    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:41.856061    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:41.866858    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:41.866943    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:41.877433    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:41.877510    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:41.891439    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:41.891516    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:41.904869    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:41.904953    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:41.922479    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:41.922566    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:41.934221    4045 logs.go:282] 0 containers: []
	W1009 12:45:41.934234    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:41.934307    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:41.946257    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:41.946274    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:41.946279    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:41.950815    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:41.950830    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:41.966050    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:41.966068    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:41.994768    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:41.994790    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:42.009876    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:42.009885    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:42.024484    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:42.024497    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:42.036946    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:42.036957    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:42.055860    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:42.055874    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:42.069829    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:42.069844    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:42.111707    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:42.111722    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:42.151932    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:42.151945    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:42.168388    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:42.168401    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:42.183381    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:42.183393    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:42.207132    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:42.207150    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:42.223310    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:42.223327    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:42.237324    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:42.237336    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:44.752101    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:49.788490    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:49.788591    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:49.800583    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:49.800628    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:49.812491    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:49.812536    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:49.824213    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:49.824265    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:49.835218    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:49.835272    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:49.846823    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:49.846900    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:49.860232    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:49.860315    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:49.871951    4056 logs.go:282] 0 containers: []
	W1009 12:45:49.871964    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:49.872034    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:49.883025    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:49.883040    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:49.883045    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:49.904843    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:49.904855    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:49.917087    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:49.917098    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:49.929405    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:49.929414    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:49.947787    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:49.947800    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:49.952848    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:49.952860    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:49.990855    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:49.990866    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:50.005394    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:50.005403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:50.020876    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:50.020887    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:50.064451    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:50.064472    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:50.082994    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:50.083011    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:50.108306    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:50.108323    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:50.122264    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:50.122277    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:50.137400    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:50.137412    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:50.149902    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:50.149918    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:50.161471    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:50.161482    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:50.176298    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:50.176307    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:50.191410    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:50.191421    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:49.754360    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:49.754550    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:49.766566    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:49.766644    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:49.778162    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:49.778247    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:49.788670    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:49.788715    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:49.800275    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:49.800359    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:49.811704    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:49.811781    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:49.822996    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:49.823073    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:49.834227    4045 logs.go:282] 0 containers: []
	W1009 12:45:49.834241    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:49.834311    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:49.845198    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:49.845219    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:49.845225    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:49.850093    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:49.850104    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:49.865732    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:49.865744    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:49.884181    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:49.884189    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:49.896513    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:49.896524    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:49.912172    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:49.912187    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:49.927589    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:49.927602    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:49.940477    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:49.940488    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:49.953762    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:49.953772    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:49.968375    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:49.968386    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:49.984629    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:49.984645    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:50.008491    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:50.008502    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:50.048061    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:50.048070    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:50.086774    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:50.086787    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:50.113047    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:50.113062    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:50.125376    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:50.125388    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:52.704063    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:52.645004    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:57.706179    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:57.706267    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:57.718090    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:45:57.718175    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:57.729722    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:45:57.729806    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:57.740812    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:45:57.740890    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:57.751667    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:45:57.751746    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:57.762963    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:45:57.763043    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:57.774978    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:45:57.775059    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:57.786677    4056 logs.go:282] 0 containers: []
	W1009 12:45:57.786690    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:57.786760    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:57.797696    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:45:57.797709    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:57.797714    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:57.835932    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:45:57.835944    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:45:57.855731    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:57.855739    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:57.860253    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:45:57.860264    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:45:57.871968    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:45:57.871976    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:45:57.883891    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:45:57.883903    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:45:57.902370    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:45:57.902385    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:45:57.915420    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:45:57.915431    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:57.928660    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:45:57.928669    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:45:57.943414    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:45:57.943424    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:45:57.958991    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:45:57.959002    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:45:57.971097    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:45:57.971109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:45:57.987064    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:45:57.987075    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:45:57.999621    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:57.999633    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:58.022535    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:58.022543    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:58.064268    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:45:58.064280    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:45:58.078628    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:45:58.078638    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:45:58.091081    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:45:58.091091    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:45:57.647200    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:57.647312    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:57.659183    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:57.659263    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:57.669866    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:57.669950    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:57.680701    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:57.680786    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:57.691518    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:57.691596    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:57.701743    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:57.701823    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:57.713427    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:57.713512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:57.724357    4045 logs.go:282] 0 containers: []
	W1009 12:45:57.724371    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:57.724443    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:57.735734    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:57.735750    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:57.735756    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:57.751304    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:57.751316    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:57.770537    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:57.770550    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:57.795741    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:57.795759    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:57.800943    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:57.800952    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:57.813825    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:57.813836    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:57.839724    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:57.839738    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:57.854318    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:57.854330    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:57.870683    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:57.870701    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:57.911935    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:57.911952    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:57.927298    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:57.927312    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:57.940835    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:57.940848    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:57.955262    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:57.955276    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:57.968226    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:57.968237    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:57.982445    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:57.982457    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:58.022029    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:58.022039    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
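	(The recurring pattern above — `Checking apiserver healthz at https://10.0.2.15:8443/healthz` followed roughly five seconds later by `stopped: ... Client.Timeout exceeded` — is a timed GET against the apiserver that never answers. A minimal standalone sketch of such a probe; this is hypothetical illustration, not minikube's actual api_server.go, and the 5-second client timeout is an assumption read off the timestamps:)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint.
// A hung apiserver surfaces as a client timeout, which is what the
// "context deadline exceeded (Client.Timeout exceeded ...)" lines show.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// this ad-hoc probe does not trust the cluster's serving cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}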
	I1009 12:46:00.535875    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:00.607388    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:05.538235    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:05.538834    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:05.586848    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:05.587007    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:05.607015    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:05.607131    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:05.622737    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:05.622824    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:05.635812    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:05.635899    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:05.650496    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:05.650574    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:05.663267    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:05.663351    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:05.676109    4045 logs.go:282] 0 containers: []
	W1009 12:46:05.676121    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:05.676192    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:05.687855    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:05.687874    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:05.687880    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:05.714235    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:05.714252    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:05.729945    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:05.729957    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:05.744153    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:05.744163    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:05.786305    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:05.786320    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:05.791071    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:05.791079    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:05.831078    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:05.831090    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:05.845490    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:05.845506    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:05.861788    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:05.861797    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:05.880894    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:05.880906    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:05.906243    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:05.906255    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:05.919951    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:05.919966    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:05.932937    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:05.932950    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:05.948777    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:05.948789    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:05.962016    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:05.962030    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:05.977885    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:05.977900    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
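	(After each failed health check, the runner enumerates containers per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and counts the IDs, which is what the `logs.go:282] 2 containers: [...]` lines report. A hedged local sketch of that enumeration step — run directly instead of over SSH, with hypothetical helper names:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name carries
// the kubelet's k8s_<component> prefix, printing only the short IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// one ID per line, e.g. [997d69e6cf17 7ab74f2cae22]
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:282 lines
	}
}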
	I1009 12:46:05.609724    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:05.609817    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:05.624255    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:05.624308    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:05.636254    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:05.636298    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:05.652119    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:05.652176    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:05.668391    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:05.668477    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:05.680133    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:05.680218    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:05.692199    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:05.692283    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:05.707808    4056 logs.go:282] 0 containers: []
	W1009 12:46:05.707819    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:05.707888    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:05.719499    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:05.719516    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:05.719521    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:05.763887    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:05.763899    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:05.778145    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:05.778155    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:05.790812    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:05.790825    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:05.811263    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:05.811275    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:05.823989    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:05.824006    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:05.861636    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:05.861649    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:05.874391    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:05.874408    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:05.890302    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:05.890315    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:05.903135    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:05.903147    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:05.915667    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:05.915680    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:05.920536    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:05.920546    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:05.935725    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:05.935739    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:05.951030    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:05.951041    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:05.966817    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:05.966830    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:05.978915    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:05.978925    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:05.991670    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:05.991680    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:06.009721    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:06.009732    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
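	(The gathering steps then tail each enumerated container's log and, for the "container status" entry, fall back from crictl to plain docker — the exact shell commands are quoted in the Run: lines. A sketch under those assumptions, with container IDs hard-coded purely for illustration:)

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell pipeline the way the ssh_runner lines do
// (via /bin/bash -c), but locally instead of over SSH.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Per-container logs, capped at 400 lines as in the log above.
	for _, id := range []string{"2ed907d795ce", "fa97d7ee7da6"} {
		if out, err := run("docker logs --tail 400 " + id); err == nil {
			fmt.Print(out)
		}
	}
	// "container status": prefer crictl if installed, else docker ps -a,
	// reproducing the fallback chain quoted in the log.
	if out, err := run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"); err == nil {
		fmt.Print(out)
	}
}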
	I1009 12:46:08.535155    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:08.490689    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:13.537528    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:13.537705    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:13.555951    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:13.556046    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:13.569870    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:13.569953    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:13.582607    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:13.582684    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:13.594460    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:13.594529    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:13.605918    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:13.605989    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:13.619900    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:13.619986    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:13.631166    4056 logs.go:282] 0 containers: []
	W1009 12:46:13.631176    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:13.631258    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:13.644305    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:13.644320    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:13.644326    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:13.690743    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:13.690762    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:13.730906    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:13.730923    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:13.746068    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:13.746079    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:13.758225    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:13.758238    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:13.773243    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:13.773260    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:13.786023    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:13.786034    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:13.804364    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:13.804375    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:13.824634    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:13.824649    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:13.829650    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:13.829657    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:13.842486    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:13.842497    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:13.866399    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:13.866416    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:13.880668    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:13.880680    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:13.893250    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:13.893261    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:13.909201    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:13.909213    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:13.924698    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:13.924708    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:13.942922    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:13.942935    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:13.954662    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:13.954672    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:13.493104    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:13.493607    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:13.529731    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:13.529889    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:13.550287    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:13.550406    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:13.565595    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:13.565687    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:13.578860    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:13.578949    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:13.591960    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:13.592119    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:13.604033    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:13.604115    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:13.615411    4045 logs.go:282] 0 containers: []
	W1009 12:46:13.615424    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:13.615497    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:13.627518    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:13.627543    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:13.627550    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:13.665302    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:13.665314    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:13.680030    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:13.680040    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:13.695750    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:13.695764    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:13.700505    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:13.700516    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:13.713332    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:13.713344    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:13.726283    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:13.726295    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:13.752572    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:13.752584    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:13.768417    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:13.768431    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:13.786692    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:13.786701    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:13.811680    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:13.811693    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:13.832701    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:13.832712    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:13.845794    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:13.845806    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:13.867102    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:13.867111    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:13.881332    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:13.881341    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:13.894463    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:13.894472    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:16.438535    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:16.472053    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:21.441219    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:21.441771    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:21.485198    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:21.485362    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:21.506236    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:21.506359    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:21.522198    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:21.522288    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:21.535642    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:21.535727    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:21.547434    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:21.547510    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:21.560044    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:21.560128    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:21.571812    4045 logs.go:282] 0 containers: []
	W1009 12:46:21.571824    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:21.571894    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:21.583990    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:21.584006    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:21.584011    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:21.597380    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:21.597392    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:21.620939    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:21.620959    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:21.637116    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:21.637132    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:21.651566    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:21.651575    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:21.666490    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:21.666501    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:21.679120    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:21.679134    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:21.692274    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:21.692289    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:21.474440    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:21.474757    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:21.499463    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:21.499574    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:21.516421    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:21.516511    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:21.529745    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:21.529824    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:21.541822    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:21.541907    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:21.553405    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:21.553486    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:21.566552    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:21.566637    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:21.577714    4056 logs.go:282] 0 containers: []
	W1009 12:46:21.577725    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:21.577801    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:21.592297    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:21.592311    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:21.592317    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:21.616382    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:21.616394    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:21.628799    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:21.628812    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:21.649007    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:21.649020    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:21.669008    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:21.669018    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:21.681866    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:21.681876    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:21.686980    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:21.686992    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:21.699678    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:21.699692    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:21.713125    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:21.713136    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:21.739123    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:21.739143    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:21.772221    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:21.772240    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:21.824403    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:21.824415    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:21.836481    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:21.836494    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:21.851822    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:21.851834    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:21.874796    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:21.874807    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:21.888846    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:21.888859    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:21.905721    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:21.905738    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:21.917451    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:21.917463    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:24.460670    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:21.733872    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:21.733894    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:21.791517    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:21.791533    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:21.808098    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:21.808115    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:21.827425    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:21.827435    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:21.847011    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:21.847024    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:21.859486    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:21.859499    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:21.864415    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:21.864424    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:21.880349    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:21.880360    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:24.408910    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:29.460909    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:29.460984    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:29.476397    4056 logs.go:282] 2 containers: [2ed907d795ce 60b710e3ac8d]
	I1009 12:46:29.476491    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:29.488591    4056 logs.go:282] 2 containers: [16844be32e26 21acea369545]
	I1009 12:46:29.488676    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:29.500625    4056 logs.go:282] 1 containers: [2c9d954bac62]
	I1009 12:46:29.500705    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:29.512688    4056 logs.go:282] 2 containers: [fa97d7ee7da6 301a37b51d64]
	I1009 12:46:29.512777    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:29.524078    4056 logs.go:282] 2 containers: [c4dd4d079dff ec3f65181026]
	I1009 12:46:29.524158    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:29.535393    4056 logs.go:282] 2 containers: [1176b15d7e25 6c7a674ad960]
	I1009 12:46:29.535470    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:29.546646    4056 logs.go:282] 0 containers: []
	W1009 12:46:29.546667    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:29.546738    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:29.557853    4056 logs.go:282] 2 containers: [ff43aab002d3 6ad25cea7b79]
	I1009 12:46:29.557870    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:29.557876    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:29.581237    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:29.581247    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:29.625444    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:29.625465    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:29.630591    4056 logs.go:123] Gathering logs for kube-apiserver [2ed907d795ce] ...
	I1009 12:46:29.630604    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed907d795ce"
	I1009 12:46:29.645463    4056 logs.go:123] Gathering logs for coredns [2c9d954bac62] ...
	I1009 12:46:29.645474    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9d954bac62"
	I1009 12:46:29.660280    4056 logs.go:123] Gathering logs for storage-provisioner [6ad25cea7b79] ...
	I1009 12:46:29.660293    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ad25cea7b79"
	I1009 12:46:29.672546    4056 logs.go:123] Gathering logs for kube-scheduler [fa97d7ee7da6] ...
	I1009 12:46:29.672559    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa97d7ee7da6"
	I1009 12:46:29.685196    4056 logs.go:123] Gathering logs for kube-proxy [c4dd4d079dff] ...
	I1009 12:46:29.685208    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4dd4d079dff"
	I1009 12:46:29.698450    4056 logs.go:123] Gathering logs for storage-provisioner [ff43aab002d3] ...
	I1009 12:46:29.698464    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff43aab002d3"
	I1009 12:46:29.710509    4056 logs.go:123] Gathering logs for kube-controller-manager [1176b15d7e25] ...
	I1009 12:46:29.710520    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1176b15d7e25"
	I1009 12:46:29.728462    4056 logs.go:123] Gathering logs for kube-controller-manager [6c7a674ad960] ...
	I1009 12:46:29.728479    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c7a674ad960"
	I1009 12:46:29.746715    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:46:29.746732    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:29.759595    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:29.759607    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:29.797174    4056 logs.go:123] Gathering logs for etcd [16844be32e26] ...
	I1009 12:46:29.797182    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16844be32e26"
	I1009 12:46:29.815742    4056 logs.go:123] Gathering logs for etcd [21acea369545] ...
	I1009 12:46:29.815755    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21acea369545"
	I1009 12:46:29.834645    4056 logs.go:123] Gathering logs for kube-scheduler [301a37b51d64] ...
	I1009 12:46:29.834657    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301a37b51d64"
	I1009 12:46:29.855263    4056 logs.go:123] Gathering logs for kube-proxy [ec3f65181026] ...
	I1009 12:46:29.855277    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec3f65181026"
	I1009 12:46:29.866525    4056 logs.go:123] Gathering logs for kube-apiserver [60b710e3ac8d] ...
	I1009 12:46:29.866537    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60b710e3ac8d"
	I1009 12:46:29.411139    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:29.411633    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:29.441667    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:29.441815    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:29.460687    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:29.460787    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:29.478603    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:29.478681    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:29.490308    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:29.490375    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:29.501692    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:29.501743    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:29.513035    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:29.513079    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:29.524555    4045 logs.go:282] 0 containers: []
	W1009 12:46:29.524564    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:29.524600    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:29.536529    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:29.536544    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:29.536549    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:29.576518    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:29.576529    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:29.595212    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:29.595223    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:29.613956    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:29.613972    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:29.626984    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:29.626994    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:29.639559    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:29.639571    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:29.666155    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:29.666171    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:29.681884    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:29.681900    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:29.694950    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:29.694964    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:29.718779    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:29.718792    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:29.756968    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:29.756982    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:29.773379    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:29.773395    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:29.796743    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:29.796759    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:29.812245    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:29.812258    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:29.830157    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:29.830169    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:29.842719    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:29.842732    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:32.380041    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:32.348470    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:37.351059    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:37.351142    4045 kubeadm.go:597] duration metric: took 4m4.666133542s to restartPrimaryControlPlane
	W1009 12:46:37.351208    4045 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 12:46:37.351239    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1009 12:46:38.502871    4045 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.151652583s)
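	(The "duration metric" values here — `4m4.666133542s` for the abandoned control-plane restart, `1.151652583s` for the reset — are standard Go duration strings; a trivial check that they round-trip through time.ParseDuration, with the values copied from the log:)

package main

import (
	"fmt"
	"time"
)

func main() {
	for _, s := range []string{"4m4.666133542s", "1.151652583s"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			fmt.Println("parse error:", err)
			continue
		}
		fmt.Println(s, "=", d.Seconds(), "seconds") // 244.666... and 1.151...
	}
}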
	I1009 12:46:38.503399    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 12:46:38.509135    4045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:46:38.512199    4045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:46:38.515511    4045 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 12:46:38.515518    4045 kubeadm.go:157] found existing configuration files:
	
	I1009 12:46:38.515572    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf
	I1009 12:46:38.518723    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 12:46:38.518767    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:46:38.521711    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf
	I1009 12:46:38.524389    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 12:46:38.524430    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:46:38.527651    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.530886    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 12:46:38.530929    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.534524    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf
	I1009 12:46:38.537627    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 12:46:38.537668    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
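	(The grep/rm sequence above is the stale-config cleanup from kubeadm.go:155-163: each expected kubeconfig under /etc/kubernetes is checked for the current control-plane endpoint and removed when the check fails — here the files simply do not exist after `kubeadm reset`, so grep exits with status 2 and the forced remove is a no-op. A rough Go equivalent of that decision, with hypothetical names:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfig removes any kubeconfig that does not reference the
// expected control-plane endpoint, so `kubeadm init` rewrites it fresh.
func cleanupStaleConfig(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// grep exited 2 in the log (file missing); either way, remove
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // ignore the error, like the forced `rm -f`
		}
	}
}

func main() {
	cleanupStaleConfig("https://control-plane.minikube.internal:53678", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}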
	I1009 12:46:38.540538    4045 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 12:46:38.558873    4045 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1009 12:46:38.558987    4045 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 12:46:38.615815    4045 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 12:46:38.615964    4045 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 12:46:38.616114    4045 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 12:46:38.671944    4045 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 12:46:37.382395    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:37.382487    4056 kubeadm.go:597] duration metric: took 4m5.154483542s to restartPrimaryControlPlane
	W1009 12:46:37.382547    4056 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 12:46:37.382577    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1009 12:46:38.511080    4056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.128522959s)
	I1009 12:46:38.511130    4056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 12:46:38.516455    4056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:46:38.519886    4056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:46:38.522828    4056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 12:46:38.522834    4056 kubeadm.go:157] found existing configuration files:
	
	I1009 12:46:38.522869    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf
	I1009 12:46:38.525700    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 12:46:38.525733    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:46:38.529153    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf
	I1009 12:46:38.532315    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 12:46:38.532354    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:46:38.535159    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.537927    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 12:46:38.537954    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.541244    4056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf
	I1009 12:46:38.544382    4056 kubeadm.go:163] "https://control-plane.minikube.internal:53775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53775 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 12:46:38.544431    4056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 12:46:38.547353    4056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 12:46:38.564407    4056 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1009 12:46:38.564591    4056 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 12:46:38.617587    4056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 12:46:38.617641    4056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 12:46:38.617698    4056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 12:46:38.673987    4056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 12:46:38.676173    4045 out.go:235]   - Generating certificates and keys ...
	I1009 12:46:38.676213    4045 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 12:46:38.676263    4045 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 12:46:38.676327    4045 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 12:46:38.676370    4045 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 12:46:38.676512    4045 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 12:46:38.676542    4045 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 12:46:38.676579    4045 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 12:46:38.676614    4045 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 12:46:38.676653    4045 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 12:46:38.676704    4045 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 12:46:38.676727    4045 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 12:46:38.676760    4045 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 12:46:38.803980    4045 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 12:46:38.901546    4045 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 12:46:39.007628    4045 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 12:46:39.060519    4045 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 12:46:39.091960    4045 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 12:46:39.092360    4045 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 12:46:39.092440    4045 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 12:46:39.178147    4045 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 12:46:38.683088    4056 out.go:235]   - Generating certificates and keys ...
	I1009 12:46:38.683169    4056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 12:46:38.683265    4056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 12:46:38.683383    4056 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 12:46:38.683471    4056 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 12:46:38.683565    4056 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 12:46:38.683602    4056 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 12:46:38.683642    4056 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 12:46:38.683678    4056 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 12:46:38.683777    4056 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 12:46:38.683822    4056 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 12:46:38.683863    4056 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 12:46:38.683898    4056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 12:46:38.842235    4056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 12:46:39.158174    4056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 12:46:39.269993    4056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 12:46:39.353931    4056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 12:46:39.385941    4056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 12:46:39.386329    4056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 12:46:39.386457    4056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 12:46:39.478883    4056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 12:46:39.482719    4056 out.go:235]   - Booting up control plane ...
	I1009 12:46:39.482834    4056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 12:46:39.482931    4056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 12:46:39.487369    4056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 12:46:39.487630    4056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 12:46:39.488506    4056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 12:46:39.182176    4045 out.go:235]   - Booting up control plane ...
	I1009 12:46:39.182233    4045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 12:46:39.182389    4045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 12:46:39.182522    4045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 12:46:39.183101    4045 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 12:46:39.184958    4045 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 12:46:43.991806    4056 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503329 seconds
	I1009 12:46:43.991910    4056 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 12:46:43.995870    4056 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 12:46:44.505032    4056 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 12:46:44.505134    4056 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-763000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 12:46:45.009117    4056 kubeadm.go:310] [bootstrap-token] Using token: o94btb.71bdwp2j2jh2bto7
	I1009 12:46:44.192189    4045 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007811 seconds
	I1009 12:46:44.192390    4045 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 12:46:44.199773    4045 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 12:46:44.709113    4045 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 12:46:44.709228    4045 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-220000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 12:46:45.214212    4045 kubeadm.go:310] [bootstrap-token] Using token: peo2jx.ukob1vaa9j8bqbc9
	I1009 12:46:45.014477    4056 out.go:235]   - Configuring RBAC rules ...
	I1009 12:46:45.014553    4056 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 12:46:45.014605    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 12:46:45.016879    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 12:46:45.017952    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 12:46:45.018823    4056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 12:46:45.019799    4056 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 12:46:45.022755    4056 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 12:46:45.226027    4056 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 12:46:45.413317    4056 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 12:46:45.413691    4056 kubeadm.go:310] 
	I1009 12:46:45.413727    4056 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 12:46:45.413734    4056 kubeadm.go:310] 
	I1009 12:46:45.413779    4056 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 12:46:45.413812    4056 kubeadm.go:310] 
	I1009 12:46:45.413832    4056 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 12:46:45.413858    4056 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 12:46:45.413883    4056 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 12:46:45.413886    4056 kubeadm.go:310] 
	I1009 12:46:45.413911    4056 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 12:46:45.413960    4056 kubeadm.go:310] 
	I1009 12:46:45.414064    4056 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 12:46:45.414072    4056 kubeadm.go:310] 
	I1009 12:46:45.414098    4056 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 12:46:45.414168    4056 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 12:46:45.414202    4056 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 12:46:45.414204    4056 kubeadm.go:310] 
	I1009 12:46:45.414246    4056 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 12:46:45.414292    4056 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 12:46:45.414295    4056 kubeadm.go:310] 
	I1009 12:46:45.414335    4056 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o94btb.71bdwp2j2jh2bto7 \
	I1009 12:46:45.414395    4056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e \
	I1009 12:46:45.414406    4056 kubeadm.go:310] 	--control-plane 
	I1009 12:46:45.414409    4056 kubeadm.go:310] 
	I1009 12:46:45.414446    4056 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 12:46:45.414450    4056 kubeadm.go:310] 
	I1009 12:46:45.414497    4056 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o94btb.71bdwp2j2jh2bto7 \
	I1009 12:46:45.414557    4056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e 
	I1009 12:46:45.414733    4056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 12:46:45.414743    4056 cni.go:84] Creating CNI manager for ""
	I1009 12:46:45.414752    4056 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:46:45.419317    4056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 12:46:45.426541    4056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 12:46:45.430048    4056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
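	The scp line above copies a 496-byte bridge CNI conflist into /etc/cni/net.d; the file's contents are not reproduced in the log. A hedged sketch of writing a hypothetical minimal bridge conflist (the JSON below is assumed, not the bytes minikube actually ships), which must run as root to write under /etc:

```go
package main

import "os"

// Hypothetical minimal bridge conflist; the real 1-k8s.conflist that minikube
// copies is 496 bytes and its exact contents are not shown in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [{
    "type": "bridge",
    "bridge": "bridge",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
  }]
}`

func main() {
	// Must run as root: /etc/cni/net.d is owned by root on the guest.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```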
	I1009 12:46:45.435682    4056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 12:46:45.435778    4056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 12:46:45.435817    4056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-763000 minikube.k8s.io/updated_at=2024_10_09T12_46_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=running-upgrade-763000 minikube.k8s.io/primary=true
	I1009 12:46:45.483664    4056 kubeadm.go:1113] duration metric: took 47.963333ms to wait for elevateKubeSystemPrivileges
	I1009 12:46:45.483674    4056 ops.go:34] apiserver oom_adj: -16
	I1009 12:46:45.483683    4056 kubeadm.go:394] duration metric: took 4m13.271076583s to StartCluster
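	Just before this, the runner reads /proc/$(pgrep kube-apiserver)/oom_adj and logs the value -16. A small Go equivalent of that check, assuming a single kube-apiserver process is running on the guest:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep fails (non-zero exit) when no kube-apiserver is running.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	// Take the first PID in case pgrep matched more than one process.
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	// The run above logged "apiserver oom_adj: -16".
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```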
	I1009 12:46:45.483695    4056 settings.go:142] acquiring lock: {Name:mk60ce4ac2055fafaa579c122d2ddfc9feae1fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.483797    4056 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:46:45.484214    4056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.484705    4056 config.go:182] Loaded profile config "running-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:46:45.484870    4056 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:46:45.485210    4056 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 12:46:45.485356    4056 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-763000"
	I1009 12:46:45.485364    4056 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-763000"
	W1009 12:46:45.485368    4056 addons.go:243] addon storage-provisioner should already be in state true
	I1009 12:46:45.485380    4056 host.go:66] Checking if "running-upgrade-763000" exists ...
	I1009 12:46:45.485363    4056 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-763000"
	I1009 12:46:45.485445    4056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-763000"
	I1009 12:46:45.486543    4056 kapi.go:59] client config for running-upgrade-763000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10233c0f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 12:46:45.486687    4056 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-763000"
	W1009 12:46:45.486692    4056 addons.go:243] addon default-storageclass should already be in state true
	I1009 12:46:45.486704    4056 host.go:66] Checking if "running-upgrade-763000" exists ...
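	The rest.Config dump above shows how the test client authenticates: host plus client certificate, key, and cluster CA files. A minimal client-go sketch shaped the same way, assuming the k8s.io/client-go module is available; the node list call at the end is just a connectivity probe, not part of the original flow:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and credential paths mirror the rest.Config dump above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/running-upgrade-763000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A simple connectivity probe; any authenticated list call would do.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```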
	I1009 12:46:45.488291    4056 out.go:177] * Verifying Kubernetes components...
	I1009 12:46:45.488808    4056 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:45.492349    4056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 12:46:45.492364    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:46:45.496231    4056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:46:45.218111    4045 out.go:235]   - Configuring RBAC rules ...
	I1009 12:46:45.218173    4045 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 12:46:45.218222    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 12:46:45.225494    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 12:46:45.227105    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 12:46:45.228405    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 12:46:45.229899    4045 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 12:46:45.235170    4045 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 12:46:45.439861    4045 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 12:46:45.621393    4045 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 12:46:45.621727    4045 kubeadm.go:310] 
	I1009 12:46:45.621829    4045 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 12:46:45.621846    4045 kubeadm.go:310] 
	I1009 12:46:45.621960    4045 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 12:46:45.621967    4045 kubeadm.go:310] 
	I1009 12:46:45.621980    4045 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 12:46:45.622022    4045 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 12:46:45.622075    4045 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 12:46:45.622082    4045 kubeadm.go:310] 
	I1009 12:46:45.622115    4045 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 12:46:45.622120    4045 kubeadm.go:310] 
	I1009 12:46:45.622145    4045 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 12:46:45.622150    4045 kubeadm.go:310] 
	I1009 12:46:45.622178    4045 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 12:46:45.622219    4045 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 12:46:45.622259    4045 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 12:46:45.622261    4045 kubeadm.go:310] 
	I1009 12:46:45.622366    4045 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 12:46:45.622440    4045 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 12:46:45.622445    4045 kubeadm.go:310] 
	I1009 12:46:45.622485    4045 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token peo2jx.ukob1vaa9j8bqbc9 \
	I1009 12:46:45.622545    4045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e \
	I1009 12:46:45.622560    4045 kubeadm.go:310] 	--control-plane 
	I1009 12:46:45.622588    4045 kubeadm.go:310] 
	I1009 12:46:45.622634    4045 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 12:46:45.622641    4045 kubeadm.go:310] 
	I1009 12:46:45.622683    4045 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token peo2jx.ukob1vaa9j8bqbc9 \
	I1009 12:46:45.622738    4045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e 
	I1009 12:46:45.622802    4045 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 12:46:45.622816    4045 cni.go:84] Creating CNI manager for ""
	I1009 12:46:45.623070    4045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:46:45.626323    4045 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 12:46:45.634235    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 12:46:45.637475    4045 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 12:46:45.642877    4045 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 12:46:45.642988    4045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-220000 minikube.k8s.io/updated_at=2024_10_09T12_46_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=stopped-upgrade-220000 minikube.k8s.io/primary=true
	I1009 12:46:45.643060    4045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 12:46:45.696864    4045 ops.go:34] apiserver oom_adj: -16
	I1009 12:46:45.696920    4045 kubeadm.go:1113] duration metric: took 53.972292ms to wait for elevateKubeSystemPrivileges
	I1009 12:46:45.696936    4045 kubeadm.go:394] duration metric: took 4m13.026165709s to StartCluster
	I1009 12:46:45.696949    4045 settings.go:142] acquiring lock: {Name:mk60ce4ac2055fafaa579c122d2ddfc9feae1fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.697036    4045 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:46:45.697440    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.697640    4045 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:46:45.697711    4045 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 12:46:45.697755    4045 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-220000"
	I1009 12:46:45.697763    4045 config.go:182] Loaded profile config "stopped-upgrade-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:46:45.697764    4045 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-220000"
	I1009 12:46:45.697777    4045 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-220000"
	W1009 12:46:45.697782    4045 addons.go:243] addon storage-provisioner should already be in state true
	I1009 12:46:45.697815    4045 host.go:66] Checking if "stopped-upgrade-220000" exists ...
	I1009 12:46:45.697831    4045 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-220000"
	I1009 12:46:45.698258    4045 retry.go:31] will retry after 723.618154ms: connect: dial unix /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/monitor: connect: connection refused
	I1009 12:46:45.698989    4045 kapi.go:59] client config for stopped-upgrade-220000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027600f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 12:46:45.699124    4045 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-220000"
	W1009 12:46:45.699129    4045 addons.go:243] addon default-storageclass should already be in state true
	I1009 12:46:45.699140    4045 host.go:66] Checking if "stopped-upgrade-220000" exists ...
	I1009 12:46:45.699748    4045 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:45.699754    4045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 12:46:45.699760    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:46:45.701256    4045 out.go:177] * Verifying Kubernetes components...
	I1009 12:46:45.708262    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:46:45.798192    4045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:46:45.804123    4045 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:46:45.804186    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:46:45.808717    4045 api_server.go:72] duration metric: took 111.068584ms to wait for apiserver process to appear ...
	I1009 12:46:45.808727    4045 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:46:45.808736    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:45.828711    4045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:46.151615    4045 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 12:46:46.151627    4045 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 12:46:46.425756    4045 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:46:46.429674    4045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:46.429681    4045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 12:46:46.429688    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:46:46.461584    4045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
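	Both runners apply addon manifests by invoking the cluster's own kubectl binary with an explicit KUBECONFIG, as in the line above. A Go sketch of that invocation, using the paths from the log; sudo's support for leading VAR=value arguments is what carries the kubeconfig into the elevated environment:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// sudo treats leading VAR=value arguments as environment assignments,
	// so KUBECONFIG reaches kubectl inside the elevated environment.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```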
	I1009 12:46:45.500386    4056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:46:45.504285    4056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:45.504294    4056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 12:46:45.504303    4056 sshutil.go:53] new ssh client: &{IP:localhost Port:53683 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/running-upgrade-763000/id_rsa Username:docker}
	I1009 12:46:45.596806    4056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:46:45.603232    4056 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:46:45.603311    4056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:46:45.607803    4056 api_server.go:72] duration metric: took 122.92075ms to wait for apiserver process to appear ...
	I1009 12:46:45.607814    4056 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:46:45.607824    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:45.634308    4056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:45.649397    4056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:45.970308    4056 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 12:46:45.970320    4056 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 12:46:50.810650    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:50.810695    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:50.608705    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:50.608727    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
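	From here the log settles into a poll cycle for both runners: each healthz GET times out after roughly five seconds ("Client.Timeout exceeded"), is logged as "stopped", and is retried. A self-contained Go sketch of that polling loop; InsecureSkipVerify is a shortcut for this diagnostic sketch only, since the guest apiserver's CA is not loaded here:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Matches the ~5s cadence of the "stopped: ... Client.Timeout
		// exceeded" entries above.
		Timeout: 5 * time.Second,
		// Diagnostic shortcut only: the guest CA is not loaded here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("apiserver never became healthy")
}
```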
	I1009 12:46:55.810765    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:55.810815    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:55.609616    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:55.609652    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:00.811138    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:00.811161    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:00.609788    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:00.609810    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:05.811910    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:05.811945    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:05.610026    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:05.610093    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:10.812573    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:10.812608    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:10.610422    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:10.610461    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:15.611401    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:15.611441    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1009 12:47:15.972656    4056 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1009 12:47:15.976836    4056 out.go:177] * Enabled addons: storage-provisioner
	I1009 12:47:15.813357    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:15.813383    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1009 12:47:16.153075    4045 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1009 12:47:16.157078    4045 out.go:177] * Enabled addons: storage-provisioner
	I1009 12:47:16.165241    4045 addons.go:510] duration metric: took 30.468458625s for enable addons: enabled=[storage-provisioner]
	I1009 12:47:15.988525    4056 addons.go:510] duration metric: took 30.504522542s for enable addons: enabled=[storage-provisioner]
	I1009 12:47:20.814313    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:20.814333    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:20.612256    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:20.612331    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:25.815591    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:25.815627    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:25.613583    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:25.613609    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:30.817258    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:30.817306    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:30.614933    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:30.614978    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:35.819351    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:35.819375    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:35.616807    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:35.616844    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:40.821438    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:40.821472    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:40.616935    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:40.616965    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:45.821683    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:45.821772    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:45.834442    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:47:45.834530    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:45.846186    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:47:45.846271    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:45.857405    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:47:45.857491    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:45.868135    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:47:45.868214    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:45.880755    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:47:45.880838    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:45.892786    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:47:45.892865    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:45.904347    4045 logs.go:282] 0 containers: []
	W1009 12:47:45.904358    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:45.904429    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:45.915731    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:47:45.915747    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:47:45.915753    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:47:45.938098    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:47:45.938115    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:47:45.951712    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:47:45.951724    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:47:45.967702    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:47:45.967719    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:47:45.986024    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:45.986037    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:46.023069    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:47:46.023080    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:47:46.038371    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:47:46.038383    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:47:46.054551    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:47:46.054561    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:47:46.069320    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:47:46.069330    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:47:46.080589    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:46.080597    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:46.104922    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:47:46.104931    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:46.116653    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:46.116663    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:46.152306    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:46.152315    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
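	When healthz keeps failing, both runners fall back to evidence gathering: list container IDs matching a k8s_<component> name filter, then tail the last 400 log lines of each. A Go sketch of that pattern; the component list is abbreviated from the one walked through in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs lists container IDs whose names match k8s_<component>,
// then tails each container's last 400 log lines, as in the log above.
func gatherLogs(component string) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// CombinedOutput keeps stderr, where docker writes some log lines.
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
	}
}

func main() {
	// Abbreviated from the full component list in the log.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		gatherLogs(c)
	}
}
```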
	I1009 12:47:45.619055    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:45.619186    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:45.640930    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:47:45.641021    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:45.659849    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:47:45.659942    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:45.671804    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:47:45.671894    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:45.682164    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:47:45.682240    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:45.693085    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:47:45.693169    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:45.710470    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:47:45.710549    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:45.721082    4056 logs.go:282] 0 containers: []
	W1009 12:47:45.721096    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:45.721153    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:45.736547    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:47:45.736566    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:47:45.736571    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:47:45.751082    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:47:45.751091    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:47:45.762104    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:47:45.762115    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:47:45.773533    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:47:45.773544    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:47:45.791956    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:47:45.791970    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:47:45.809040    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:47:45.809057    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:47:45.821582    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:45.821594    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:45.847050    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:45.847064    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:45.883358    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:45.883369    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:47:45.888462    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:45.888478    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:45.927096    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:47:45.927109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:47:45.942753    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:47:45.942765    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:47:45.955915    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:47:45.955928    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:48.470412    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:48.658512    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:53.473073    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:53.473525    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:53.503756    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:47:53.503902    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:53.522085    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:47:53.522198    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:53.536167    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:47:53.536265    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:53.547875    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:47:53.547954    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:53.562321    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:47:53.562402    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:53.572569    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:47:53.572636    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:53.583887    4056 logs.go:282] 0 containers: []
	W1009 12:47:53.583900    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:53.583957    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:53.594598    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:47:53.594614    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:47:53.594619    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:47:53.608774    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:47:53.608785    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:47:53.623120    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:47:53.623131    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:47:53.636098    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:47:53.636109    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:47:53.648432    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:47:53.648443    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:47:53.662073    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:47:53.662082    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:47:53.675287    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:53.675298    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:53.713570    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:53.713584    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:47:53.718674    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:47:53.718690    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:47:53.734519    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:47:53.734529    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:47:53.759326    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:53.759335    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:53.786352    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:47:53.786363    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:53.799075    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:53.799089    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:53.660921    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:53.661022    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:53.672428    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:47:53.672516    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:53.683981    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:47:53.684058    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:53.694862    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:47:53.694941    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:53.708292    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:47:53.708377    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:53.719522    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:47:53.719601    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:53.730935    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:47:53.731024    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:53.742104    4045 logs.go:282] 0 containers: []
	W1009 12:47:53.742119    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:53.742190    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:53.754221    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:47:53.754237    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:53.754243    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:47:53.758978    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:47:53.758986    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:47:53.774286    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:47:53.774297    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:47:53.786104    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:47:53.786117    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:47:53.798392    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:47:53.798404    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:47:53.817838    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:47:53.817849    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:53.830745    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:53.830757    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:53.864781    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:53.864790    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:53.898850    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:47:53.898861    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:47:53.914862    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:47:53.914872    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:47:53.927073    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:47:53.927084    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:47:53.943036    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:47:53.943050    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:47:53.954714    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:53.954723    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
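	The repeated api_server.go:269 "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" lines above are the signature of Go's net/http client giving up while waiting for response headers. A minimal sketch of that health probe, assuming the endpoint, the ~5s client timeout (the check/stopped pairs above are exactly 5s apart), and skip-verify TLS handling; this is illustrative only, not minikube's actual code:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Probe the apiserver's /healthz with a short timeout. When the
	        // apiserver is unreachable, Get fails with "context deadline
	        // exceeded (Client.Timeout exceeded while awaiting headers)",
	        // matching the error logged at api_server.go:269 above.
	        client := &http.Client{
	            Timeout: 5 * time.Second, // assumed; inferred from the 5s gap between check and "stopped"
	            Transport: &http.Transport{
	                // The test cluster serves a self-signed certificate, so
	                // this sketch skips verification; the real client would
	                // trust the cluster CA from the kubeconfig instead.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            fmt.Println("stopped:", err)
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz:", resp.Status)
	    }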
	I1009 12:47:56.480562    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:56.337879    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:01.481598    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:01.481687    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:01.493306    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:01.493387    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:01.504435    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:01.504515    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:01.516178    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:01.516261    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:01.529344    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:01.529423    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:01.540604    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:01.540690    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:01.556654    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:01.556739    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:01.567074    4045 logs.go:282] 0 containers: []
	W1009 12:48:01.567084    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:01.567152    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:01.578528    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:01.578542    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:01.578547    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:01.590333    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:01.590345    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:01.609384    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:01.609393    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:01.636064    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:01.636081    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:01.648356    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:01.648370    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:01.686380    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:01.686400    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:01.691401    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:01.691415    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:01.340086    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:01.340373    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:01.363198    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:01.363308    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:01.378653    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:01.378746    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:01.391261    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:01.391337    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:01.402178    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:01.402263    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:01.412594    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:01.412668    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:01.422848    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:01.422932    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:01.433023    4056 logs.go:282] 0 containers: []
	W1009 12:48:01.433041    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:01.433114    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:01.443037    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:01.443053    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:01.443058    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:01.478125    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:01.478134    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:01.515841    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:01.515858    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:01.528060    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:01.528073    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:01.541193    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:01.541204    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:01.565101    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:01.565114    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:01.590978    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:01.590992    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:01.603585    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:01.603596    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:01.608968    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:01.608978    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:01.623857    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:01.623872    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:01.639085    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:01.639095    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:01.653964    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:01.653977    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:01.677893    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:01.677904    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
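	Each failed probe is followed by the same diagnostic sweep: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component to discover container IDs, then `docker logs --tail 400` on every hit (host-level kubelet, Docker, and dmesg output is gathered via journalctl and dmesg instead). A minimal local sketch of that sweep, assuming a docker CLI on PATH; minikube runs the identical commands inside the VM through its ssh_runner:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same component list the log iterates over, in the same order.
	        components := []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager",
	            "kindnet", "storage-provisioner"}
	        for _, c := range components {
	            // Discover containers by the kubelet's k8s_<component> naming.
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            fmt.Printf("%d containers: %v\n", len(ids), ids)
	            // Tail the last 400 lines of each container found, as the
	            // "Gathering logs for ..." steps above do.
	            for _, id := range ids {
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
	            }
	        }
	    }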
	I1009 12:48:04.191947    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:01.705844    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:01.705855    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:01.719784    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:01.719798    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:01.732985    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:01.732999    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:01.766935    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:01.766947    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:01.778625    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:01.778639    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:01.794081    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:01.794092    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:04.308556    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:09.194156    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:09.194433    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:09.215705    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:09.215809    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:09.230986    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:09.231077    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:09.244097    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:09.244179    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:09.254853    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:09.254936    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:09.265512    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:09.265597    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:09.276188    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:09.276269    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:09.286544    4056 logs.go:282] 0 containers: []
	W1009 12:48:09.286556    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:09.286625    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:09.298016    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:09.298030    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:09.298036    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:09.312125    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:09.312137    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:09.325332    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:09.325345    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:09.350557    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:09.350569    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:09.363344    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:09.363357    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:09.401625    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:09.401636    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:09.447984    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:09.447998    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:09.475343    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:09.475361    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:09.494389    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:09.494400    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:09.521955    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:09.521981    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:09.527416    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:09.527434    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:09.542146    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:09.542158    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:09.561882    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:09.561899    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:09.308734    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:09.308841    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:09.319953    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:09.320034    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:09.330908    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:09.330994    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:09.342084    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:09.342176    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:09.354277    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:09.354362    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:09.365428    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:09.365512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:09.387403    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:09.387484    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:09.398244    4045 logs.go:282] 0 containers: []
	W1009 12:48:09.398255    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:09.398322    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:09.410329    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:09.410346    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:09.410352    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:09.436636    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:09.436657    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:09.458411    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:09.458432    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:09.478049    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:09.478065    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:09.497207    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:09.497216    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:09.509828    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:09.509838    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:09.522427    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:09.522437    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:09.538129    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:09.538141    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:09.551340    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:09.551352    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:09.564350    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:09.564359    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:09.601196    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:09.601207    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:09.606505    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:09.606512    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:09.640861    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:09.640873    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:12.077875    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:12.158510    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:17.080017    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:17.080172    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:17.096437    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:17.096528    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:17.109064    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:17.109146    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:17.128605    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:17.128690    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:17.139317    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:17.139392    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:17.154406    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:17.154486    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:17.165403    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:17.165479    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:17.181732    4056 logs.go:282] 0 containers: []
	W1009 12:48:17.181746    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:17.181816    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:17.193368    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:17.193384    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:17.193389    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:17.206366    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:17.206378    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:17.218779    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:17.218791    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:17.238431    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:17.238447    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:17.251458    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:17.251470    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:17.270702    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:17.270715    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:17.307428    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:17.307452    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:17.387448    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:17.387462    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:17.400637    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:17.400649    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:17.416874    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:17.416888    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:17.443790    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:17.443801    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:17.449348    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:17.449356    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:17.465351    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:17.465359    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:19.982008    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:17.160558    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:17.160640    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:17.172486    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:17.172572    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:17.188121    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:17.188206    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:17.200061    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:17.200140    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:17.211250    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:17.211325    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:17.222453    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:17.222536    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:17.234166    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:17.234246    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:17.248241    4045 logs.go:282] 0 containers: []
	W1009 12:48:17.248254    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:17.248323    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:17.264429    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:17.264445    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:17.264450    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:17.284654    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:17.284666    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:17.297753    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:17.297765    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:17.316388    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:17.316401    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:17.354654    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:17.354671    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:17.359406    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:17.359416    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:17.396574    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:17.396591    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:17.411475    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:17.411488    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:17.424135    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:17.424146    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:17.436375    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:17.436385    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:17.462945    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:17.462956    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:17.476080    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:17.476092    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:17.491391    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:17.491406    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:20.008802    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:24.984208    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:24.984427    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:24.998974    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:24.999064    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:25.018421    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:25.018491    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:25.029825    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:25.029899    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:25.041068    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:25.041156    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:25.053093    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:25.053168    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:25.064692    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:25.064767    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:25.082789    4056 logs.go:282] 0 containers: []
	W1009 12:48:25.082799    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:25.082832    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:25.094292    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:25.094306    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:25.094312    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:25.111006    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:25.111015    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:25.115834    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:25.115844    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:25.153805    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:25.153816    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:25.169015    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:25.169027    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:25.184645    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:25.184655    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:25.199473    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:25.199486    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:25.215024    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:25.215038    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:25.244266    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:25.244282    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:25.280469    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:25.280487    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:25.293199    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:25.293210    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:25.308988    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:25.309000    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:25.321477    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:25.321489    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:25.010850    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:25.010951    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:25.022244    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:25.022324    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:25.033765    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:25.033911    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:25.047001    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:25.047075    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:25.058679    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:25.058755    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:25.069879    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:25.069958    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:25.082478    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:25.082554    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:25.095803    4045 logs.go:282] 0 containers: []
	W1009 12:48:25.095814    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:25.095881    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:25.110889    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:25.110905    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:25.110910    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:25.136775    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:25.136790    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:25.150402    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:25.150415    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:25.188635    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:25.188646    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:25.227196    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:25.227213    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:25.239668    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:25.239681    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:25.256639    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:25.256653    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:25.268346    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:25.268356    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:25.286184    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:25.286195    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:25.291008    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:25.291018    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:25.306626    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:25.306640    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:25.321721    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:25.321729    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:25.335578    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:25.335588    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:27.849442    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:27.849609    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:32.851755    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:32.851996    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:32.875468    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:32.875541    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:32.892303    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:32.892347    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:32.906452    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:32.906492    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:32.918344    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:32.918387    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:32.929840    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:32.929882    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:32.945648    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:32.945724    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:32.956923    4056 logs.go:282] 0 containers: []
	W1009 12:48:32.956935    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:32.957009    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:32.968657    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:32.968670    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:32.968675    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:32.981281    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:32.981293    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:32.999258    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:32.999275    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:33.014121    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:33.014136    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:33.046705    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:33.046732    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:33.061167    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:33.061178    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:33.066444    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:33.066454    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:33.106698    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:33.106709    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:33.122083    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:33.122095    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:33.137232    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:33.137243    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:33.156150    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:33.156161    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:33.193903    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:33.193913    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:33.208881    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:33.208894    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:32.851685    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:32.851997    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:32.875240    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:32.875353    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:32.892005    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:32.892104    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:32.905957    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:32.906043    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:32.918155    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:32.918236    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:32.929492    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:32.929583    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:32.940692    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:32.940775    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:32.952017    4045 logs.go:282] 0 containers: []
	W1009 12:48:32.952029    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:32.952101    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:32.963488    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:32.963504    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:32.963510    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:32.968421    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:32.968431    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:33.007553    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:33.007572    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:33.023609    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:33.023624    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:33.056468    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:33.056479    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:33.070090    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:33.070101    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:33.083402    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:33.083417    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:33.099752    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:33.099768    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:33.137959    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:33.137969    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:33.156648    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:33.156658    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:33.176032    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:33.176047    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:33.201811    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:33.201828    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:33.214264    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:33.214277    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:35.728042    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:35.723924    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:40.730030    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:40.730157    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:40.755592    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:40.755675    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:40.766827    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:40.766903    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:40.780382    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:40.780459    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:40.792352    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:40.792427    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:40.803917    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:40.803995    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:40.815229    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:40.815306    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:40.826926    4045 logs.go:282] 0 containers: []
	W1009 12:48:40.826938    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:40.827007    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:40.838124    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:40.838141    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:40.838147    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:40.851377    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:40.851388    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:40.867397    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:40.867408    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:40.879559    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:40.879575    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:40.894490    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:40.894507    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:40.910852    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:40.910866    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:40.949957    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:40.949966    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:40.963238    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:40.963250    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:40.976316    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:40.976328    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:40.995685    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:40.995702    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:41.020720    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:41.020734    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:41.033253    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:41.033264    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:41.069253    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:41.069265    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:40.726013    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:40.726273    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:40.743185    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:40.743275    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:40.762844    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:40.762926    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:40.773898    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:40.773979    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:40.787481    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:40.787563    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:40.799424    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:40.799504    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:40.810960    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:40.811039    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:40.822184    4056 logs.go:282] 0 containers: []
	W1009 12:48:40.822195    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:40.822260    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:40.833661    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:40.833676    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:40.833682    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:40.871330    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:40.871341    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:40.884282    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:40.884296    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:40.900492    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:40.900509    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:40.913253    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:40.913265    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:40.931329    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:40.931346    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:40.944132    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:40.944144    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:40.949309    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:40.949321    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:40.988538    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:40.988551    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:41.003175    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:41.003189    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:41.017790    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:41.017801    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:41.030628    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:41.030640    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:41.043294    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:41.043305    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
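	Zooming out, the timestamp cadence implies an outer wait loop: probe /healthz, and on failure run the diagnostic sweep above, then re-probe until an overall deadline expires. A rough sketch of that loop; the cadence, deadline, and structure are all assumptions read off the timestamps, not taken from minikube's source:

	    package main

	    import (
	        "context"
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(ctx context.Context, url string) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // per-probe timeout, as in the sketch above
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	            },
	        }
	        ticker := time.NewTicker(8 * time.Second) // roughly the probe cadence seen above
	        defer ticker.Stop()
	        for {
	            if resp, err := client.Get(url); err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // apiserver answered; stop waiting
	                }
	            }
	            select {
	            case <-ctx.Done():
	                return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
	            case <-ticker.C:
	                // A real implementation would gather diagnostics here
	                // (the docker ps / docker logs / journalctl sweep),
	                // then loop and re-probe.
	            }
	        }
	    }

	    func main() {
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	        defer cancel()
	        fmt.Println(waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"))
	    }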
	I1009 12:48:43.569258    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:43.575867    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:48.571428    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:48.571723    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:48.592857    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:48.592950    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:48.605835    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:48.605920    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:48.617937    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:48.618012    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:48.633591    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:48.633672    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:48.645223    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:48.645302    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:48.657922    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:48.658000    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:48.669675    4056 logs.go:282] 0 containers: []
	W1009 12:48:48.669688    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:48.669749    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:48.681124    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:48.681137    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:48.681142    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:48.697393    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:48.697410    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:48.709621    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:48.709635    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:48.734288    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:48.734301    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:48.749572    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:48.749586    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:48.762480    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:48.762492    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:48.801582    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:48.801595    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:48.816518    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:48.816526    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:48.829240    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:48.829251    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:48.850857    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:48.850874    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:48.864783    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:48.864796    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:48.877373    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:48.877387    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:48.912296    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:48.912308    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:48.577875    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:48.578032    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:48.593835    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:48.593887    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:48.606547    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:48.606591    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:48.618138    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:48.618185    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:48.630484    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:48.630564    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:48.641954    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:48.642036    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:48.654219    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:48.654296    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:48.666236    4045 logs.go:282] 0 containers: []
	W1009 12:48:48.666250    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:48.666319    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:48.678121    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:48.678136    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:48.678141    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:48.692726    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:48.692743    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:48.706167    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:48.706179    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:48.729569    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:48.729580    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:48.742113    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:48.742125    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:48.758428    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:48.758440    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:48.772504    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:48.772516    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:48.808286    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:48.808302    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:48.813289    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:48.813301    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:48.852732    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:48.852741    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:48.869038    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:48.869050    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:48.887922    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:48.887931    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:48.915907    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:48.915919    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:51.429310    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:51.419371    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:56.431479    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:56.431821    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:56.463927    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:56.464055    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:56.482505    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:56.482604    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:56.498361    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:56.498451    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:56.515441    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:56.515508    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:56.527359    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:56.527448    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:56.540763    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:56.540848    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:56.552758    4045 logs.go:282] 0 containers: []
	W1009 12:48:56.552765    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:56.552794    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:56.565270    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:56.565285    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:56.565291    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:56.604294    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:56.604319    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:56.632068    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:56.632084    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:56.673119    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:56.673136    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:56.420923    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:56.421431    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:56.458788    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:48:56.458955    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:56.483898    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:48:56.484071    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:56.512857    4056 logs.go:282] 2 containers: [1c97f65809c7 a76162a06587]
	I1009 12:48:56.512939    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:56.534795    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:48:56.534879    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:56.552516    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:48:56.552606    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:56.572324    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:48:56.572406    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:56.608584    4056 logs.go:282] 0 containers: []
	W1009 12:48:56.608598    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:56.608673    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:56.627507    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:48:56.627568    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:56.627616    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:56.720457    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:56.720470    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:56.747857    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:48:56.747875    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:48:56.762722    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:48:56.762734    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:48:56.779927    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:48:56.779938    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:48:56.798301    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:56.798313    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:56.836181    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:56.836202    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:56.841709    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:48:56.841723    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:48:56.856890    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:48:56.856902    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:48:56.872410    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:48:56.872424    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:48:56.885181    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:48:56.885194    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:48:56.904143    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:48:56.904157    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:48:56.917988    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:48:56.918003    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:59.432882    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:56.703236    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:56.703250    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:56.733695    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:56.733707    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:56.768644    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:56.768665    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:56.775676    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:56.775688    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:56.862675    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:56.862688    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:56.878819    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:56.878830    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:56.891529    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:56.891543    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:56.908379    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:56.908394    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:56.928034    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:56.928047    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:59.442525    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:04.435072    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:04.435439    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:04.464255    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:04.464380    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:04.483503    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:04.483606    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:04.498208    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:04.498301    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:04.510414    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:04.510498    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:04.521206    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:04.521256    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:04.532680    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:04.532763    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:04.544000    4056 logs.go:282] 0 containers: []
	W1009 12:49:04.544014    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:04.544091    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:04.555810    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:04.555828    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:04.555834    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:04.561148    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:04.561158    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:04.576564    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:04.576577    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:04.589337    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:04.589347    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:04.601839    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:04.601852    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:04.614072    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:04.614087    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:04.626716    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:04.626730    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:04.639954    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:04.639965    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:04.677791    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:04.677807    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:04.693596    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:04.693604    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:04.714874    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:04.714887    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:04.753499    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:04.753516    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:04.765153    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:04.765168    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:04.781423    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:04.781435    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:04.793801    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:04.793813    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:04.444564    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:04.444758    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:04.471231    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:04.471335    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:04.489151    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:04.489249    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:04.503722    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:04.503815    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:04.521173    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:04.521256    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:04.533468    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:04.533512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:04.545283    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:04.545341    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:04.557044    4045 logs.go:282] 0 containers: []
	W1009 12:49:04.557055    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:04.557123    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:04.571142    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:04.571163    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:04.571168    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:04.587599    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:04.587611    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:04.604874    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:04.604885    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:04.624135    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:04.624153    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:04.651737    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:04.651751    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:04.675006    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:04.675018    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:04.680241    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:04.680251    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:04.693516    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:04.693529    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:04.710060    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:04.710074    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:04.722878    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:04.722891    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:04.735448    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:04.735461    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:04.775758    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:04.775771    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:04.791929    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:04.791946    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:04.805379    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:04.805391    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:04.841979    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:04.841988    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:07.322730    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:07.356182    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:12.324981    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:12.325134    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:12.339843    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:12.339934    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:12.352038    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:12.352123    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:12.363254    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:12.363336    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:12.374762    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:12.374833    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:12.387465    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:12.387541    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:12.399558    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:12.399625    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:12.410726    4056 logs.go:282] 0 containers: []
	W1009 12:49:12.410735    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:12.410781    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:12.429774    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:12.429790    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:12.429796    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:12.468756    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:12.468769    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:12.496944    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:12.496956    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:12.509450    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:12.509465    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:12.535691    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:12.535706    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:12.551217    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:12.551232    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:12.564620    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:12.564632    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:12.570119    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:12.570130    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:12.589594    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:12.589605    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:12.601671    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:12.601683    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:12.617825    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:12.617841    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:12.632730    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:12.632743    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:12.670623    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:12.670635    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:12.687667    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:12.687679    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:12.700719    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:12.700731    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:15.216115    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:12.358248    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:12.358340    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:12.370867    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:12.370947    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:12.384449    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:12.384533    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:12.397437    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:12.397520    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:12.409753    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:12.409834    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:12.422267    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:12.422348    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:12.433802    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:12.433883    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:12.445500    4045 logs.go:282] 0 containers: []
	W1009 12:49:12.445511    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:12.445583    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:12.458015    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:12.458034    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:12.458039    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:12.475887    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:12.475903    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:12.489684    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:12.489700    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:12.502619    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:12.502631    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:12.520330    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:12.520342    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:12.539083    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:12.539093    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:12.552690    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:12.552698    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:12.566518    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:12.566528    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:12.582208    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:12.582221    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:12.610789    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:12.610804    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:12.651592    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:12.651606    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:12.667713    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:12.667725    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:12.688130    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:12.688140    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:12.701264    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:12.701273    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:12.738281    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:12.738293    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:15.244814    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:20.218374    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:20.218708    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:20.247114    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:20.247200    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:20.266402    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:20.266454    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:20.281376    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:20.281464    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:20.293614    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:20.293700    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:20.305617    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:20.305695    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:20.317148    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:20.317229    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:20.328971    4056 logs.go:282] 0 containers: []
	W1009 12:49:20.328982    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:20.329055    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:20.340617    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:20.340637    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:20.340643    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:20.360770    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:20.360782    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:20.375257    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:20.375270    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:20.391586    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:20.391597    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:20.417168    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:20.417180    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:20.246268    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:20.246433    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:20.266174    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:20.266266    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:20.281759    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:20.281811    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:20.295150    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:20.295221    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:20.310871    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:20.310951    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:20.323723    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:20.323806    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:20.336768    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:20.336851    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:20.350023    4045 logs.go:282] 0 containers: []
	W1009 12:49:20.350036    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:20.350107    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:20.363039    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:20.363057    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:20.363062    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:20.376413    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:20.376423    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:20.413806    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:20.413817    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:20.427470    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:20.427482    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:20.440868    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:20.440880    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:20.479155    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:20.479170    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:20.498501    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:20.498512    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:20.511531    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:20.511543    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:20.525135    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:20.525151    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:20.539836    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:20.539849    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:20.553651    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:20.553664    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:20.558228    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:20.558244    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:20.574562    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:20.574569    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:20.591822    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:20.591831    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:20.610976    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:20.610988    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:20.433780    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:20.433796    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:20.446383    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:20.446396    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:20.464249    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:20.464261    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:20.476893    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:20.476908    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:20.514491    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:20.514503    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:20.520125    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:20.520138    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:20.560068    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:20.560079    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:20.573437    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:20.573451    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:20.589011    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:20.589027    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:20.611292    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:20.611301    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:23.125888    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:23.138829    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:28.128120    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:28.128399    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:28.152043    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:28.152158    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:28.168623    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:28.168709    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:28.182033    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:28.182100    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:28.193496    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:28.193543    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:28.204535    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:28.204617    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:28.216674    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:28.216762    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:28.227446    4056 logs.go:282] 0 containers: []
	W1009 12:49:28.227471    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:28.227545    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:28.240321    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:28.240342    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:28.240348    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:28.257912    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:28.257921    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:28.275494    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:28.275504    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:28.287301    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:28.287312    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:28.324099    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:28.324123    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:28.339829    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:28.339841    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:28.352642    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:28.352657    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:28.367735    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:28.367743    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:28.380441    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:28.380450    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:28.405616    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:28.405629    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:28.421770    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:28.421783    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:28.426269    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:28.426276    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:28.462759    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:28.462771    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:28.479319    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:28.479331    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:28.499990    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:28.500002    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:28.140954    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:28.141146    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:28.164755    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:28.164850    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:28.179592    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:28.179680    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:28.192281    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:28.192371    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:28.205331    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:28.205378    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:28.217513    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:28.217558    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:28.229605    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:28.229671    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:28.244761    4045 logs.go:282] 0 containers: []
	W1009 12:49:28.244773    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:28.244845    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:28.256502    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:28.256520    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:28.256525    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:28.282377    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:28.282394    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:28.296564    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:28.296575    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:28.301091    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:28.301100    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:28.317201    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:28.317212    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:28.336704    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:28.336717    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:28.365831    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:28.365843    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:28.379132    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:28.379155    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:28.392497    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:28.392510    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:28.407256    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:28.407266    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:28.443788    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:28.443812    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:28.483265    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:28.483277    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:28.501931    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:28.501941    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:28.524283    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:28.524294    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:28.548806    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:28.548817    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:31.062386    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:31.017364    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:36.064473    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:36.064585    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:36.081439    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:36.081518    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:36.093119    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:36.093205    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:36.104728    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:36.104772    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:36.115798    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:36.115870    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:36.127492    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:36.127574    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:36.138750    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:36.138835    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:36.149757    4045 logs.go:282] 0 containers: []
	W1009 12:49:36.149768    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:36.149838    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:36.161173    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:36.161191    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:36.161196    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:36.176572    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:36.176585    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:36.189670    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:36.189682    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:36.205560    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:36.205573    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:36.218287    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:36.218299    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:36.232781    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:36.232792    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:36.270604    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:36.270612    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:36.288773    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:36.288784    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:36.311362    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:36.311374    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:36.324374    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:36.324387    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:36.351036    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:36.351046    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:36.366349    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:36.366360    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:36.379050    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:36.379063    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:36.384052    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:36.384060    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:36.396424    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:36.396441    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:36.019619    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:36.019851    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:36.035894    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:36.035990    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:36.048468    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:36.048548    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:36.059485    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:36.059569    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:36.070577    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:36.070652    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:36.081678    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:36.081713    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:36.093630    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:36.093671    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:36.104227    4056 logs.go:282] 0 containers: []
	W1009 12:49:36.104240    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:36.104313    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:36.115691    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:36.115711    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:36.115718    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:36.153874    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:36.153886    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:36.167355    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:36.167368    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:36.180141    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:36.180154    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:36.193118    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:36.193132    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:36.219406    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:36.219415    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:36.257152    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:36.257178    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:36.270281    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:36.270293    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:36.284675    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:36.284687    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:36.297340    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:36.297353    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:36.313513    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:36.313523    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:36.319826    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:36.319839    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:36.335691    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:36.335706    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:36.348391    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:36.348404    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:36.366910    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:36.366920    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:38.881487    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:38.932053    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:43.878906    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:43.879084    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:43.890828    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:43.890909    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:43.901869    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:43.901946    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:43.913035    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:43.913114    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:43.923335    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:43.923409    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:43.933973    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:43.934054    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:43.945880    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:43.945955    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:43.957790    4056 logs.go:282] 0 containers: []
	W1009 12:49:43.957808    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:43.957903    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:43.969680    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:43.969710    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:43.969717    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:44.009503    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:44.009519    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:44.025656    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:44.025666    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:44.040368    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:44.040379    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:44.066860    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:44.066874    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:44.080026    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:44.080042    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:44.092310    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:44.092326    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:44.110392    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:44.110403    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:44.129310    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:44.129326    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:44.141854    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:44.141867    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:44.155447    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:44.155460    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:44.160799    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:44.160808    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:44.175185    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:44.175197    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:44.193961    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:44.193977    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:44.231712    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:44.231728    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:43.928913    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:43.929001    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:43.940727    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:43.940807    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:43.951864    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:43.951940    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:43.964971    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:43.965050    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:43.976970    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:43.977049    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:43.988059    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:43.988135    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:43.999369    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:43.999474    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:44.011312    4045 logs.go:282] 0 containers: []
	W1009 12:49:44.011322    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:44.011391    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:44.022799    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:44.022821    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:44.022827    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:44.036583    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:44.036594    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:44.054870    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:44.054881    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:44.095801    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:44.095813    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:44.111378    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:44.111386    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:44.124025    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:44.124041    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:44.136429    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:44.136445    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:44.149865    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:44.149877    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:44.186493    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:44.186511    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:44.207327    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:44.207387    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:44.221486    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:44.221502    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:44.242416    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:44.242429    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:44.268294    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:44.268305    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:44.272406    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:44.272412    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:44.284460    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:44.284474    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:46.745112    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:46.796594    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:51.744216    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:51.744390    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:51.758251    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:51.758335    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:51.769199    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:51.769279    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:51.779645    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:51.779718    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:51.790328    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:51.790394    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:51.800981    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:51.801060    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:51.812997    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:51.813082    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:51.823855    4056 logs.go:282] 0 containers: []
	W1009 12:49:51.823867    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:51.823936    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:51.835423    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:51.835442    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:51.835447    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:51.850261    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:51.850271    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:51.868531    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:51.868545    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:51.906023    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:51.906037    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:51.910771    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:51.910783    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:51.928037    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:51.928048    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:51.939973    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:51.939988    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:51.952419    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:51.952431    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:51.990878    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:51.990889    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:52.005382    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:52.005393    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:52.018305    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:52.018319    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:52.034560    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:52.034578    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:52.047388    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:52.047399    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:52.073444    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:52.073453    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:52.085863    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:52.085875    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:54.600178    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:51.795577    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:51.795654    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:51.807207    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:51.807285    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:51.823996    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:51.824039    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:51.835869    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:51.835945    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:51.847378    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:51.847458    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:51.859101    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:51.859181    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:51.870165    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:51.870242    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:51.881046    4045 logs.go:282] 0 containers: []
	W1009 12:49:51.881058    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:51.881129    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:51.892841    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:51.892860    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:51.892865    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:51.905120    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:51.905131    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:51.917874    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:51.917886    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:51.954130    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:51.954142    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:51.959096    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:51.959107    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:51.974966    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:51.974978    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:51.993805    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:51.993813    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:52.009057    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:52.009068    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:52.024146    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:52.024158    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:52.036659    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:52.036670    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:52.052988    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:52.052998    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:52.080122    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:52.080134    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:52.093097    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:52.093109    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:52.105939    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:52.105951    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:52.141872    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:52.141884    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:54.655325    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:59.599729    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:59.600005    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:59.632521    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:49:59.632626    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:59.646925    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:49:59.647005    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:59.658646    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:49:59.658730    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:59.673881    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:49:59.673958    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:59.685150    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:49:59.685232    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:59.696739    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:49:59.696824    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:59.708833    4056 logs.go:282] 0 containers: []
	W1009 12:49:59.708848    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:59.708930    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:59.720732    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:49:59.720775    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:49:59.720784    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:49:59.734128    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:49:59.734139    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:49:59.753693    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:59.753711    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:59.780399    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:59.780410    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:59.818914    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:49:59.818926    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:49:59.832370    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:49:59.832381    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:49:59.845791    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:59.845802    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:59.886813    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:59.886828    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:59.891624    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:49:59.891635    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:49:59.904181    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:49:59.904192    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:49:59.919647    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:49:59.919661    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:49:59.941200    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:49:59.941210    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:49:59.953993    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:49:59.954005    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:49:59.967732    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:49:59.967743    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:49:59.982956    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:49:59.982969    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:59.655708    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:59.655795    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:59.671776    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:59.671867    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:59.687769    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:59.687839    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:59.699146    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:59.699225    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:59.710966    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:59.711041    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:59.722474    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:59.722571    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:59.735713    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:59.735794    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:59.746560    4045 logs.go:282] 0 containers: []
	W1009 12:49:59.746572    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:59.746643    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:59.763235    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:59.763256    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:59.763261    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:59.776644    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:59.776655    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:59.800108    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:59.800121    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:59.826911    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:59.826928    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:59.839397    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:59.839412    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:59.843957    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:59.843968    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:59.859680    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:59.859695    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:59.875230    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:59.875241    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:59.891709    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:59.891717    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:59.904427    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:59.904435    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:59.917573    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:59.917584    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:59.943798    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:59.943811    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:59.956613    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:59.956624    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:59.993440    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:59.993457    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:00.030163    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:00.030176    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:02.496461    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:02.545760    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:07.497876    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:07.498332    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:07.530195    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:07.530344    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:07.549179    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:07.549287    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:07.565041    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:07.565130    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:07.577820    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:07.577898    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:07.589275    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:07.589354    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:07.601895    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:07.601980    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:07.615137    4056 logs.go:282] 0 containers: []
	W1009 12:50:07.615151    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:07.615226    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:07.627101    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:07.627120    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:07.627127    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:07.632965    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:07.632974    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:07.681368    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:07.681381    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:07.698090    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:07.698101    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:07.710875    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:07.710886    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:07.747951    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:07.747969    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:07.762191    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:07.762203    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:07.777638    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:07.777654    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:07.796740    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:07.796756    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:07.814451    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:07.814461    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:07.826983    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:07.826995    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:07.842357    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:07.842370    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:07.855338    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:07.855348    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:07.867861    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:07.867873    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:07.893420    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:07.893440    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:07.546842    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:07.546986    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:07.561992    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:07.562085    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:07.574864    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:07.574946    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:07.586465    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:07.586558    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:07.597693    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:07.597774    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:07.609369    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:07.609451    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:07.621154    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:07.621233    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:07.632679    4045 logs.go:282] 0 containers: []
	W1009 12:50:07.632692    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:07.632762    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:07.644514    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:07.644533    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:07.644538    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:07.662717    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:07.662734    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:07.702880    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:07.702894    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:07.719158    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:07.719173    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:07.744406    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:07.744415    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:07.757565    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:07.757577    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:07.769832    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:07.769844    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:07.783192    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:07.783204    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:07.797073    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:07.797084    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:07.809839    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:07.809851    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:07.814639    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:07.814646    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:07.852482    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:07.852497    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:07.868927    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:07.868940    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:07.884235    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:07.884246    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:07.901882    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:07.901893    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:10.416929    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:10.406978    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:15.418557    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:15.418755    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:15.443741    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:15.443850    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:15.461338    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:15.461435    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:15.476512    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:15.476601    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:15.490210    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:15.490285    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:15.504040    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:15.504121    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:15.515587    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:15.515667    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:15.531124    4045 logs.go:282] 0 containers: []
	W1009 12:50:15.531137    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:15.531208    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:15.542688    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:15.542710    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:15.542716    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:15.555917    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:15.555930    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:15.581735    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:15.581750    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:15.605312    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:15.605321    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:15.621819    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:15.621839    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:15.645244    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:15.645255    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:15.683228    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:15.683241    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:15.730080    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:15.730094    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:15.747985    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:15.747999    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:15.760348    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:15.760361    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:15.764960    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:15.764971    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:15.780414    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:15.780425    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:15.793165    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:15.793178    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:15.808261    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:15.808272    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:15.820348    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:15.820359    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:15.408326    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:15.408650    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:15.437248    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:15.437365    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:15.454944    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:15.455042    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:15.469628    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:15.469714    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:15.487332    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:15.487411    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:15.498641    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:15.498725    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:15.510392    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:15.510467    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:15.521062    4056 logs.go:282] 0 containers: []
	W1009 12:50:15.521077    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:15.521142    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:15.532138    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:15.532152    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:15.532157    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:15.545407    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:15.545418    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:15.558059    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:15.558068    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:15.570439    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:15.570452    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:15.582915    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:15.582924    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:15.603215    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:15.603227    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:15.609142    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:15.609155    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:15.650796    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:15.650809    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:15.670918    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:15.670933    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:15.685895    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:15.685904    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:15.699020    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:15.699034    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:15.712001    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:15.712014    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:15.753648    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:15.753663    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:15.766317    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:15.766326    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:15.787263    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:15.787276    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:18.314924    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:18.333947    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:23.316783    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:23.317014    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:23.336328    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:23.336378    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:23.352817    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:23.352920    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:23.364292    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:23.364374    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:23.376035    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:23.376106    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:23.387007    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:23.387084    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:23.398048    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:23.398124    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:23.409492    4056 logs.go:282] 0 containers: []
	W1009 12:50:23.409503    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:23.409571    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:23.424349    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:23.424362    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:23.424368    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:23.437185    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:23.437195    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:23.475157    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:23.475171    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:23.491020    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:23.491036    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:23.503816    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:23.503831    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:23.516696    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:23.516710    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:23.530511    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:23.530522    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:23.545843    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:23.545852    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:23.550819    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:23.550830    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:23.590088    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:23.590101    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:23.605731    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:23.605750    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:23.618926    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:23.618938    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:23.639830    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:23.639842    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:23.652896    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:23.652908    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:23.677677    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:23.677687    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:23.335611    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:23.335731    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:23.350065    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:23.350149    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:23.361665    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:23.361740    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:23.377492    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:23.377552    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:23.388752    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:23.388815    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:23.400141    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:23.400213    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:23.411694    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:23.411758    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:23.422359    4045 logs.go:282] 0 containers: []
	W1009 12:50:23.422369    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:23.422435    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:23.435171    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:23.435188    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:23.435194    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:23.447919    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:23.447931    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:23.467627    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:23.467638    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:23.480227    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:23.480243    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:23.495028    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:23.495040    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:23.511729    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:23.511746    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:23.524589    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:23.524602    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:23.543056    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:23.543068    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:23.581492    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:23.581505    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:23.594376    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:23.594389    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:23.607120    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:23.607127    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:23.631149    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:23.631167    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:23.658522    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:23.658535    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:23.674414    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:23.674425    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:23.711682    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:23.711693    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:26.218199    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:26.191916    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:31.218863    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:31.218922    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:31.230994    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:31.231039    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:31.246888    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:31.246974    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:31.258595    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:31.258675    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:31.269871    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:31.269949    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:31.281049    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:31.281103    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:31.293163    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:31.293239    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:31.304513    4045 logs.go:282] 0 containers: []
	W1009 12:50:31.304524    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:31.304593    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:31.320270    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:31.320288    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:31.320292    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:31.356772    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:31.356780    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:31.369270    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:31.369279    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:31.387907    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:31.387922    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:31.414347    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:31.414365    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:31.452800    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:31.452819    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:31.465261    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:31.465275    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:31.481697    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:31.481716    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:31.494629    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:31.494640    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:31.510749    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:31.510766    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:31.523787    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:31.523800    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:31.528866    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:31.528876    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:31.543987    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:31.544000    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:31.557173    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:31.557187    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:31.571185    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:31.571199    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:31.193866    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:31.194094    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:31.207773    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:31.207865    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:31.218716    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:31.218801    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:31.230591    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:31.230680    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:31.241611    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:31.241700    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:31.252927    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:31.253007    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:31.265037    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:31.265116    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:31.279934    4056 logs.go:282] 0 containers: []
	W1009 12:50:31.279947    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:31.280018    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:31.292039    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:31.292057    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:31.292063    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:31.329960    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:31.329978    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:31.355384    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:31.355397    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:31.368716    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:31.368730    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:31.406668    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:31.406681    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:31.418852    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:31.418863    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:31.431418    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:31.431429    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:31.458798    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:31.458811    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:31.475217    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:31.475228    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:31.490485    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:31.490502    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:31.503247    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:31.503262    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:31.519009    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:31.519023    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:31.531914    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:31.531929    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:31.544149    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:31.544159    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:31.549214    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:31.549226    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:34.064126    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:34.085057    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:39.064959    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:39.065259    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:39.092052    4056 logs.go:282] 1 containers: [3cc33d6b32f0]
	I1009 12:50:39.092164    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:39.110007    4056 logs.go:282] 1 containers: [44b77db9eae1]
	I1009 12:50:39.110102    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:39.123292    4056 logs.go:282] 4 containers: [712b3bb4a13d 65c0edae9732 1c97f65809c7 a76162a06587]
	I1009 12:50:39.123378    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:39.134856    4056 logs.go:282] 1 containers: [448b5e0fc3ed]
	I1009 12:50:39.134941    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:39.146712    4056 logs.go:282] 1 containers: [dc50e38b6f1e]
	I1009 12:50:39.146789    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:39.158371    4056 logs.go:282] 1 containers: [6b6a7a5abf66]
	I1009 12:50:39.158455    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:39.169957    4056 logs.go:282] 0 containers: []
	W1009 12:50:39.169971    4056 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:39.170042    4056 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:39.181857    4056 logs.go:282] 1 containers: [0b5bfdb6be10]
	I1009 12:50:39.181874    4056 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:39.181881    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:39.207320    4056 logs.go:123] Gathering logs for coredns [712b3bb4a13d] ...
	I1009 12:50:39.207340    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712b3bb4a13d"
	I1009 12:50:39.220728    4056 logs.go:123] Gathering logs for coredns [a76162a06587] ...
	I1009 12:50:39.220741    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76162a06587"
	I1009 12:50:39.234635    4056 logs.go:123] Gathering logs for storage-provisioner [0b5bfdb6be10] ...
	I1009 12:50:39.234647    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5bfdb6be10"
	I1009 12:50:39.246525    4056 logs.go:123] Gathering logs for kube-apiserver [3cc33d6b32f0] ...
	I1009 12:50:39.246533    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc33d6b32f0"
	I1009 12:50:39.262299    4056 logs.go:123] Gathering logs for etcd [44b77db9eae1] ...
	I1009 12:50:39.262314    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b77db9eae1"
	I1009 12:50:39.277652    4056 logs.go:123] Gathering logs for coredns [65c0edae9732] ...
	I1009 12:50:39.277668    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c0edae9732"
	I1009 12:50:39.291401    4056 logs.go:123] Gathering logs for coredns [1c97f65809c7] ...
	I1009 12:50:39.291413    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c97f65809c7"
	I1009 12:50:39.305072    4056 logs.go:123] Gathering logs for kube-scheduler [448b5e0fc3ed] ...
	I1009 12:50:39.305085    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 448b5e0fc3ed"
	I1009 12:50:39.321675    4056 logs.go:123] Gathering logs for kube-controller-manager [6b6a7a5abf66] ...
	I1009 12:50:39.321688    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b6a7a5abf66"
	I1009 12:50:39.340764    4056 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:39.340783    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:39.379051    4056 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:39.379063    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:39.385805    4056 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:39.385816    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:39.427736    4056 logs.go:123] Gathering logs for kube-proxy [dc50e38b6f1e] ...
	I1009 12:50:39.427749    4056 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc50e38b6f1e"
	I1009 12:50:39.440717    4056 logs.go:123] Gathering logs for container status ...
	I1009 12:50:39.440730    4056 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:39.086955    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:39.087158    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:39.110623    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:39.110685    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:39.124113    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:39.124166    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:39.136360    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:39.136415    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:39.148153    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:39.148203    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:39.159452    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:39.159497    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:39.175363    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:39.175433    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:39.187123    4045 logs.go:282] 0 containers: []
	W1009 12:50:39.187134    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:39.187201    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:39.198919    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:39.198938    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:39.198944    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:39.213492    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:39.213504    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:39.226757    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:39.226770    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:39.245573    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:39.245590    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:39.258429    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:39.258443    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:39.298955    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:39.298968    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:39.314452    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:39.314464    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:39.326866    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:39.326878    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:39.331712    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:39.331723    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:39.350387    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:39.350399    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:39.363438    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:39.363449    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:39.376124    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:39.376136    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:39.414038    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:39.414063    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:39.428054    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:39.428062    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:39.440846    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:39.440856    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:41.955145    4056 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:41.966948    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:46.957143    4056 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:46.961573    4056 out.go:201] 
	W1009 12:50:46.965446    4056 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1009 12:50:46.965459    4056 out.go:270] * 
	W1009 12:50:46.966109    4056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:50:46.968846    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:46.978547    4056 out.go:201] 
	I1009 12:50:46.988503    4045 out.go:201] 
	W1009 12:50:46.997504    4045 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1009 12:50:46.997519    4045 out.go:270] * 
	W1009 12:50:46.998209    4045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:50:47.012515    4045 out.go:201] 
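	Both start attempts above fail the same way: the wait loop probes https://10.0.2.15:8443/healthz roughly every eight seconds, every probe times out, and after the 6m0s node deadline the run exits with GUEST_START. A rough bash equivalent of that probe loop (the endpoint is taken from the log; skipping certificate verification with -k is an assumption made here only because the probe does not present the cluster CA):
	
	  # /healthz returns the literal string "ok" once the apiserver is healthy.
	  until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	      echo "apiserver not healthy yet, retrying"   # in this run it never became healthy
	      sleep 8
	  done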
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-10-09 19:41:42 UTC, ends at Wed 2024-10-09 19:51:03 UTC. --
	Oct 09 19:50:47 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:47Z" level=error msg="ContainerStats resp: {0x40009cd7c0 linux}"
	Oct 09 19:50:47 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 09 19:50:47 running-upgrade-763000 dockerd[3503]: time="2024-10-09T19:50:47.122112377Z" level=error msg="Failed to compute size of container rootfs a76162a065874ac6bd96c51f100767ae50e9b54f75ff34575fbb536d13c61ac2: mount does not exist"
	Oct 09 19:50:47 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:47Z" level=error msg="Error: No such container: a76162a065874ac6bd96c51f100767ae50e9b54f75ff34575fbb536d13c61ac2 Failed to get stats from container a76162a065874ac6bd96c51f100767ae50e9b54f75ff34575fbb536d13c61ac2"
	Oct 09 19:50:48 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:48Z" level=error msg="ContainerStats resp: {0x40005df100 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x4000356280 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x40005de040 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x40005de880 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x40005dee80 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x40005df2c0 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x40005df700 linux}"
	Oct 09 19:50:49 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:49Z" level=error msg="ContainerStats resp: {0x4000768000 linux}"
	Oct 09 19:50:52 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 09 19:50:57 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 09 19:50:59 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:59Z" level=error msg="ContainerStats resp: {0x4000943140 linux}"
	Oct 09 19:50:59 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:50:59Z" level=error msg="ContainerStats resp: {0x400063e800 linux}"
	Oct 09 19:51:00 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:00Z" level=error msg="ContainerStats resp: {0x40008de700 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x40008df0c0 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x4000357580 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x40008df940 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x40008dfe00 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x4000406600 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x40005de940 linux}"
	Oct 09 19:51:01 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:01Z" level=error msg="ContainerStats resp: {0x40005ded80 linux}"
	Oct 09 19:51:02 running-upgrade-763000 cri-dockerd[3353]: time="2024-10-09T19:51:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	68881c088a704       edaa71f2aee88       17 seconds ago      Running             coredns                   2                   5067752e89088
	709424c8c50ea       edaa71f2aee88       17 seconds ago      Running             coredns                   2                   22a503a791b48
	712b3bb4a13d0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   22a503a791b48
	65c0edae97321       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5067752e89088
	dc50e38b6f1ec       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   f2cbc7c8cd7af
	0b5bfdb6be100       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   6f3b2d38d13ae
	6b6a7a5abf665       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8eb6b5528b460
	448b5e0fc3ed0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   0a18e25d23261
	44b77db9eae11       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   3cf64c886a7c3
	3cc33d6b32f04       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   3d4bbc9ff56fd
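	Note that both attempt-1 coredns containers (created two minutes earlier) have exited — their logs below end with a SIGTERM shutdown — and their attempt-2 replacements started 17 seconds before this snapshot. A quick way to pull the exit codes and stop times, using the IDs from the table above:
	
	  docker inspect --format '{{.State.ExitCode}} {{.State.FinishedAt}}' 712b3bb4a13d 65c0edae9732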
	
	
	==> coredns [65c0edae9732] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:34141->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:46140->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:54315->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:59706->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:55049->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:56990->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:55140->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:56961->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:47921->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2617618290782472541.804657820278236751. HINFO: read udp 10.244.0.3:57834->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [68881c088a70] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 57506537263045156.930360586450965029. HINFO: read udp 10.244.0.3:40190->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 57506537263045156.930360586450965029. HINFO: read udp 10.244.0.3:54069->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 57506537263045156.930360586450965029. HINFO: read udp 10.244.0.3:34352->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 57506537263045156.930360586450965029. HINFO: read udp 10.244.0.3:41266->10.0.2.3:53: i/o timeout
	
	
	==> coredns [709424c8c50e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6320366543137055960.9206699369737509270. HINFO: read udp 10.244.0.2:43443->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6320366543137055960.9206699369737509270. HINFO: read udp 10.244.0.2:36949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6320366543137055960.9206699369737509270. HINFO: read udp 10.244.0.2:35759->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6320366543137055960.9206699369737509270. HINFO: read udp 10.244.0.2:44293->10.0.2.3:53: i/o timeout
	
	
	==> coredns [712b3bb4a13d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:49123->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:53203->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:37685->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:48710->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:35166->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:47802->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:35066->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:46249->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:51361->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7055002235070361752.4380503141426140145. HINFO: read udp 10.244.0.2:56337->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
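	Every coredns instance in this run logs the same failure: its HINFO self-test query to the upstream resolver 10.0.2.3 (the built-in DNS server of QEMU's user-mode network) times out, so forwarding of external names from inside the cluster is effectively dead even while the pods themselves run. A one-line check from inside the guest, assuming nslookup is present in the Buildroot image:
	
	  nslookup kubernetes.io 10.0.2.3 || echo "upstream DNS at 10.0.2.3 unreachable"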
	
	
	==> describe nodes <==
	Name:               running-upgrade-763000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-763000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=running-upgrade-763000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T12_46_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:46:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-763000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:46:45 +0000   Wed, 09 Oct 2024 19:46:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:46:45 +0000   Wed, 09 Oct 2024 19:46:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:46:45 +0000   Wed, 09 Oct 2024 19:46:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:46:45 +0000   Wed, 09 Oct 2024 19:46:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-763000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 657d1316f1234dd5b1b621a0496ba97f
	  System UUID:                657d1316f1234dd5b1b621a0496ba97f
	  Boot ID:                    8e44ec39-e00f-479c-8b5e-7df30bc41f14
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-c2jlx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 coredns-6d4b75cb6d-gslcs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 etcd-running-upgrade-763000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-763000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-763000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-rjp7q                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-763000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m4s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-763000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-763000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-763000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-763000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m6s   node-controller  Node running-upgrade-763000 event: Registered Node running-upgrade-763000 in Controller
	
	
	==> dmesg <==
	[  +0.076166] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.076376] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.137429] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088536] systemd-fstab-generator[1043]: Ignoring "noauto" for root device
	[  +0.077313] systemd-fstab-generator[1054]: Ignoring "noauto" for root device
	[  +2.316160] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[Oct 9 19:42] systemd-fstab-generator[1929]: Ignoring "noauto" for root device
	[ +14.227095] systemd-fstab-generator[2373]: Ignoring "noauto" for root device
	[  +0.033384] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.222092] systemd-fstab-generator[2521]: Ignoring "noauto" for root device
	[  +0.123644] systemd-fstab-generator[2567]: Ignoring "noauto" for root device
	[  +0.105856] systemd-fstab-generator[2591]: Ignoring "noauto" for root device
	[  +2.393882] systemd-fstab-generator[3310]: Ignoring "noauto" for root device
	[  +0.097682] systemd-fstab-generator[3321]: Ignoring "noauto" for root device
	[  +0.087809] systemd-fstab-generator[3332]: Ignoring "noauto" for root device
	[  +0.105023] systemd-fstab-generator[3346]: Ignoring "noauto" for root device
	[  +2.290034] systemd-fstab-generator[3496]: Ignoring "noauto" for root device
	[  +0.514006] kauditd_printk_skb: 47 callbacks suppressed
	[  +2.997217] systemd-fstab-generator[3856]: Ignoring "noauto" for root device
	[  +1.501972] systemd-fstab-generator[4195]: Ignoring "noauto" for root device
	[  +4.973501] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.404449] kauditd_printk_skb: 3 callbacks suppressed
	[Oct 9 19:46] systemd-fstab-generator[12426]: Ignoring "noauto" for root device
	[  +5.637381] systemd-fstab-generator[13023]: Ignoring "noauto" for root device
	[  +0.476425] systemd-fstab-generator[13155]: Ignoring "noauto" for root device
	
	
	==> etcd [44b77db9eae1] <==
	{"level":"info","ts":"2024-10-09T19:46:40.745Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T19:46:40.746Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T19:46:40.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-09T19:46:40.746Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-09T19:46:40.746Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-09T19:46:40.746Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T19:46:40.746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-09T19:46:41.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-09T19:46:41.043Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-763000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T19:46:41.043Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T19:46:41.043Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-09T19:46:41.046Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T19:46:41.046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T19:46:41.046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T19:46:41.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T19:46:41.063Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T19:46:41.074Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T19:46:41.074Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T19:46:41.074Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:51:03 up 9 min,  0 users,  load average: 0.40, 0.38, 0.21
	Linux running-upgrade-763000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3cc33d6b32f0] <==
	I1009 19:46:42.572055       1 controller.go:611] quota admission added evaluator for: namespaces
	I1009 19:46:42.606535       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1009 19:46:42.606656       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:46:42.607771       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1009 19:46:42.607793       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:46:42.607824       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1009 19:46:42.618323       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:46:42.634352       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1009 19:46:43.323709       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1009 19:46:43.484754       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 19:46:43.488258       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:46:43.488347       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:46:43.645820       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:46:43.657491       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:46:43.676597       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1009 19:46:43.679235       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1009 19:46:43.679680       1 controller.go:611] quota admission added evaluator for: endpoints
	I1009 19:46:43.681234       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:46:44.640088       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1009 19:46:45.313184       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1009 19:46:45.324759       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1009 19:46:45.331825       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1009 19:46:58.344540       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1009 19:46:58.395046       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1009 19:46:58.880327       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6b6a7a5abf66] <==
	W1009 19:46:57.643461       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-763000. Assuming now as a timestamp.
	I1009 19:46:57.643475       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1009 19:46:57.643589       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1009 19:46:57.643593       1 event.go:294] "Event occurred" object="running-upgrade-763000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-763000 event: Registered Node running-upgrade-763000 in Controller"
	I1009 19:46:57.645531       1 shared_informer.go:262] Caches are synced for ephemeral
	I1009 19:46:57.647203       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1009 19:46:57.654941       1 shared_informer.go:262] Caches are synced for PVC protection
	I1009 19:46:57.664912       1 shared_informer.go:262] Caches are synced for resource quota
	I1009 19:46:57.664913       1 shared_informer.go:262] Caches are synced for daemon sets
	I1009 19:46:57.672110       1 shared_informer.go:262] Caches are synced for job
	I1009 19:46:57.673230       1 shared_informer.go:262] Caches are synced for cronjob
	I1009 19:46:57.674344       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1009 19:46:57.676534       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1009 19:46:57.690929       1 shared_informer.go:262] Caches are synced for HPA
	I1009 19:46:57.692311       1 shared_informer.go:262] Caches are synced for stateful set
	I1009 19:46:57.693426       1 shared_informer.go:262] Caches are synced for deployment
	I1009 19:46:57.696646       1 shared_informer.go:262] Caches are synced for attach detach
	I1009 19:46:57.698940       1 shared_informer.go:262] Caches are synced for resource quota
	I1009 19:46:58.112564       1 shared_informer.go:262] Caches are synced for garbage collector
	I1009 19:46:58.159997       1 shared_informer.go:262] Caches are synced for garbage collector
	I1009 19:46:58.160051       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1009 19:46:58.347038       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rjp7q"
	I1009 19:46:58.396146       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1009 19:46:58.495283       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gslcs"
	I1009 19:46:58.503910       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-c2jlx"
	
	
	==> kube-proxy [dc50e38b6f1e] <==
	I1009 19:46:58.867708       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1009 19:46:58.867736       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1009 19:46:58.867747       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1009 19:46:58.878233       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1009 19:46:58.878291       1 server_others.go:206] "Using iptables Proxier"
	I1009 19:46:58.878308       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1009 19:46:58.878425       1 server.go:661] "Version info" version="v1.24.1"
	I1009 19:46:58.878434       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:46:58.878813       1 config.go:317] "Starting service config controller"
	I1009 19:46:58.878825       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1009 19:46:58.878832       1 config.go:226] "Starting endpoint slice config controller"
	I1009 19:46:58.878834       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1009 19:46:58.879085       1 config.go:444] "Starting node config controller"
	I1009 19:46:58.879108       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1009 19:46:58.979700       1 shared_informer.go:262] Caches are synced for node config
	I1009 19:46:58.979714       1 shared_informer.go:262] Caches are synced for service config
	I1009 19:46:58.979725       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [448b5e0fc3ed] <==
	W1009 19:46:42.568810       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:46:42.568813       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1009 19:46:42.568832       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 19:46:42.568836       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1009 19:46:42.568855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 19:46:42.568862       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1009 19:46:42.568882       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 19:46:42.568889       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1009 19:46:42.568903       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 19:46:42.568906       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1009 19:46:42.568919       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:46:42.568922       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1009 19:46:42.568938       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 19:46:42.568941       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1009 19:46:42.568954       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 19:46:42.568961       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 19:46:42.568972       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 19:46:42.568976       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1009 19:46:43.416751       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:46:43.417294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1009 19:46:43.496736       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 19:46:43.496770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1009 19:46:43.529020       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:46:43.529119       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1009 19:46:43.858338       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-10-09 19:41:42 UTC, ends at Wed 2024-10-09 19:51:03 UTC. --
	Oct 09 19:46:47 running-upgrade-763000 kubelet[13029]: E1009 19:46:47.351341   13029 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-763000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-763000"
	Oct 09 19:46:47 running-upgrade-763000 kubelet[13029]: I1009 19:46:47.548620   13029 request.go:601] Waited for 1.111929064s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Oct 09 19:46:47 running-upgrade-763000 kubelet[13029]: E1009 19:46:47.551775   13029 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-763000\" already exists" pod="kube-system/etcd-running-upgrade-763000"
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: I1009 19:46:57.472319   13029 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: I1009 19:46:57.472655   13029 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: I1009 19:46:57.648817   13029 topology_manager.go:200] "Topology Admit Handler"
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: I1009 19:46:57.674010   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpvlb\" (UniqueName: \"kubernetes.io/projected/8393f694-0b95-4864-b54e-618575473747-kube-api-access-mpvlb\") pod \"storage-provisioner\" (UID: \"8393f694-0b95-4864-b54e-618575473747\") " pod="kube-system/storage-provisioner"
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: I1009 19:46:57.674172   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8393f694-0b95-4864-b54e-618575473747-tmp\") pod \"storage-provisioner\" (UID: \"8393f694-0b95-4864-b54e-618575473747\") " pod="kube-system/storage-provisioner"
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: E1009 19:46:57.778667   13029 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: E1009 19:46:57.778688   13029 projected.go:192] Error preparing data for projected volume kube-api-access-mpvlb for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 09 19:46:57 running-upgrade-763000 kubelet[13029]: E1009 19:46:57.778751   13029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8393f694-0b95-4864-b54e-618575473747-kube-api-access-mpvlb podName:8393f694-0b95-4864-b54e-618575473747 nodeName:}" failed. No retries permitted until 2024-10-09 19:46:58.278711254 +0000 UTC m=+12.975849758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mpvlb" (UniqueName: "kubernetes.io/projected/8393f694-0b95-4864-b54e-618575473747-kube-api-access-mpvlb") pod "storage-provisioner" (UID: "8393f694-0b95-4864-b54e-618575473747") : configmap "kube-root-ca.crt" not found
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.348346   13029 topology_manager.go:200] "Topology Admit Handler"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.382724   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6059f6d6-7e78-45b0-8097-c69b4cb13571-xtables-lock\") pod \"kube-proxy-rjp7q\" (UID: \"6059f6d6-7e78-45b0-8097-c69b4cb13571\") " pod="kube-system/kube-proxy-rjp7q"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.382784   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6059f6d6-7e78-45b0-8097-c69b4cb13571-kube-proxy\") pod \"kube-proxy-rjp7q\" (UID: \"6059f6d6-7e78-45b0-8097-c69b4cb13571\") " pod="kube-system/kube-proxy-rjp7q"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.382795   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6059f6d6-7e78-45b0-8097-c69b4cb13571-lib-modules\") pod \"kube-proxy-rjp7q\" (UID: \"6059f6d6-7e78-45b0-8097-c69b4cb13571\") " pod="kube-system/kube-proxy-rjp7q"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.382806   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshhl\" (UniqueName: \"kubernetes.io/projected/6059f6d6-7e78-45b0-8097-c69b4cb13571-kube-api-access-tshhl\") pod \"kube-proxy-rjp7q\" (UID: \"6059f6d6-7e78-45b0-8097-c69b4cb13571\") " pod="kube-system/kube-proxy-rjp7q"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.502579   13029 topology_manager.go:200] "Topology Admit Handler"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.512020   13029 topology_manager.go:200] "Topology Admit Handler"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.584160   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62969c32-17c3-456d-a34c-28c562a80b0a-config-volume\") pod \"coredns-6d4b75cb6d-gslcs\" (UID: \"62969c32-17c3-456d-a34c-28c562a80b0a\") " pod="kube-system/coredns-6d4b75cb6d-gslcs"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.584184   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdq2\" (UniqueName: \"kubernetes.io/projected/62969c32-17c3-456d-a34c-28c562a80b0a-kube-api-access-7vdq2\") pod \"coredns-6d4b75cb6d-gslcs\" (UID: \"62969c32-17c3-456d-a34c-28c562a80b0a\") " pod="kube-system/coredns-6d4b75cb6d-gslcs"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.584196   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77cdd42b-3167-4e85-9852-02452bd42efb-config-volume\") pod \"coredns-6d4b75cb6d-c2jlx\" (UID: \"77cdd42b-3167-4e85-9852-02452bd42efb\") " pod="kube-system/coredns-6d4b75cb6d-c2jlx"
	Oct 09 19:46:58 running-upgrade-763000 kubelet[13029]: I1009 19:46:58.584207   13029 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4nc\" (UniqueName: \"kubernetes.io/projected/77cdd42b-3167-4e85-9852-02452bd42efb-kube-api-access-5v4nc\") pod \"coredns-6d4b75cb6d-c2jlx\" (UID: \"77cdd42b-3167-4e85-9852-02452bd42efb\") " pod="kube-system/coredns-6d4b75cb6d-c2jlx"
	Oct 09 19:46:59 running-upgrade-763000 kubelet[13029]: I1009 19:46:59.582086   13029 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5067752e89088bb8caf676435c37379a251fadfbe935bd4272b3aee60ee46495"
	Oct 09 19:50:47 running-upgrade-763000 kubelet[13029]: I1009 19:50:47.045786   13029 scope.go:110] "RemoveContainer" containerID="1c97f65809c7e4714a7b504a0e07b1aac542b6d0f06f630430a5a7c8b25ef417"
	Oct 09 19:50:47 running-upgrade-763000 kubelet[13029]: I1009 19:50:47.067498   13029 scope.go:110] "RemoveContainer" containerID="a76162a065874ac6bd96c51f100767ae50e9b54f75ff34575fbb536d13c61ac2"
	
	
	==> storage-provisioner [0b5bfdb6be10] <==
	I1009 19:46:58.788053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:46:58.792697       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:46:58.792716       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 19:46:58.796246       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:46:58.796302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-763000_3491324d-24ff-4e18-8c4f-c92390bdc147!
	I1009 19:46:58.796595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3efe8e8f-3e8e-4a5d-a14a-a175db9b7d65", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-763000_3491324d-24ff-4e18-8c4f-c92390bdc147 became leader
	I1009 19:46:58.897191       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-763000_3491324d-24ff-4e18-8c4f-c92390bdc147!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-763000 -n running-upgrade-763000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-763000 -n running-upgrade-763000: exit status 2 (15.707794333s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-763000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-763000
--- FAIL: TestRunningBinaryUpgrade (626.16s)
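
Triage note: the probe above shows the control plane never recovered after the in-place binary upgrade; the apiserver reports "Stopped" and the status call itself takes ~15.7s. A minimal manual follow-up sketch for a local reproduction, run before the profile cleanup deletes it (this is an assumption about how one would debug it, not part of the recorded test; both commands mirror ones the harness itself invokes):

	out/minikube-darwin-arm64 status -p running-upgrade-763000
	out/minikube-darwin-arm64 logs -p running-upgrade-763000 --file=logs.txt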

TestKubernetesUpgrade (17.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.84608575s)

-- stdout --
	* [kubernetes-upgrade-134000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-134000" primary control-plane node in "kubernetes-upgrade-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:40:36.559962    3960 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:40:36.560137    3960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:36.560140    3960 out.go:358] Setting ErrFile to fd 2...
	I1009 12:40:36.560142    3960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:36.560276    3960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:40:36.561560    3960 out.go:352] Setting JSON to false
	I1009 12:40:36.579793    3960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4206,"bootTime":1728498630,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:40:36.579861    3960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:40:36.584106    3960 out.go:177] * [kubernetes-upgrade-134000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:40:36.598266    3960 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:40:36.598290    3960 notify.go:220] Checking for updates...
	I1009 12:40:36.607228    3960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:40:36.611202    3960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:40:36.614129    3960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:40:36.617167    3960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:40:36.620203    3960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:40:36.621988    3960 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:40:36.622071    3960 config.go:182] Loaded profile config "offline-docker-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:40:36.622117    3960 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:40:36.626152    3960 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:40:36.633000    3960 start.go:297] selected driver: qemu2
	I1009 12:40:36.633010    3960 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:40:36.633017    3960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:40:36.635639    3960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:40:36.639201    3960 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:40:36.642262    3960 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 12:40:36.642282    3960 cni.go:84] Creating CNI manager for ""
	I1009 12:40:36.642310    3960 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1009 12:40:36.642338    3960 start.go:340] cluster config:
	{Name:kubernetes-upgrade-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:40:36.647326    3960 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:36.655142    3960 out.go:177] * Starting "kubernetes-upgrade-134000" primary control-plane node in "kubernetes-upgrade-134000" cluster
	I1009 12:40:36.659200    3960 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 12:40:36.659219    3960 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 12:40:36.659234    3960 cache.go:56] Caching tarball of preloaded images
	I1009 12:40:36.659341    3960 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:40:36.659347    3960 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1009 12:40:36.659410    3960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kubernetes-upgrade-134000/config.json ...
	I1009 12:40:36.659421    3960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kubernetes-upgrade-134000/config.json: {Name:mk787f340c7c5759cc8f324da4f11a1f1c580222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:40:36.659883    3960 start.go:360] acquireMachinesLock for kubernetes-upgrade-134000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:36.659938    3960 start.go:364] duration metric: took 45.834µs to acquireMachinesLock for "kubernetes-upgrade-134000"
	I1009 12:40:36.659949    3960 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:40:36.659977    3960 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:40:36.667162    3960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:40:36.685780    3960 start.go:159] libmachine.API.Create for "kubernetes-upgrade-134000" (driver="qemu2")
	I1009 12:40:36.685819    3960 client.go:168] LocalClient.Create starting
	I1009 12:40:36.685913    3960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:40:36.685957    3960 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:36.685974    3960 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:36.686019    3960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:40:36.686065    3960 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:36.686073    3960 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:36.686537    3960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:40:36.837842    3960 main.go:141] libmachine: Creating SSH key...
	I1009 12:40:36.906689    3960 main.go:141] libmachine: Creating Disk image...
	I1009 12:40:36.906696    3960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:40:36.906886    3960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:36.916677    3960 main.go:141] libmachine: STDOUT: 
	I1009 12:40:36.916698    3960 main.go:141] libmachine: STDERR: 
	I1009 12:40:36.916756    3960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2 +20000M
	I1009 12:40:36.925363    3960 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:40:36.925379    3960 main.go:141] libmachine: STDERR: 
	I1009 12:40:36.925402    3960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:36.925407    3960 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:40:36.925421    3960 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:36.925452    3960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:86:9b:9b:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:36.927267    3960 main.go:141] libmachine: STDOUT: 
	I1009 12:40:36.927282    3960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:36.927303    3960 client.go:171] duration metric: took 241.4765ms to LocalClient.Create
	I1009 12:40:38.929414    3960 start.go:128] duration metric: took 2.269483333s to createHost
	I1009 12:40:38.929470    3960 start.go:83] releasing machines lock for "kubernetes-upgrade-134000", held for 2.269586542s
	W1009 12:40:38.929554    3960 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:38.943907    3960 out.go:177] * Deleting "kubernetes-upgrade-134000" in qemu2 ...
	W1009 12:40:38.968866    3960 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:38.968892    3960 start.go:729] Will try again in 5 seconds ...
	I1009 12:40:43.969471    3960 start.go:360] acquireMachinesLock for kubernetes-upgrade-134000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:43.969631    3960 start.go:364] duration metric: took 124.084µs to acquireMachinesLock for "kubernetes-upgrade-134000"
	I1009 12:40:43.969671    3960 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:40:43.969741    3960 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:40:43.977965    3960 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:40:43.996724    3960 start.go:159] libmachine.API.Create for "kubernetes-upgrade-134000" (driver="qemu2")
	I1009 12:40:43.996770    3960 client.go:168] LocalClient.Create starting
	I1009 12:40:43.996843    3960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:40:43.996881    3960 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:43.996891    3960 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:43.996925    3960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:40:43.996943    3960 main.go:141] libmachine: Decoding PEM data...
	I1009 12:40:43.996950    3960 main.go:141] libmachine: Parsing certificate...
	I1009 12:40:43.997323    3960 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:40:44.179227    3960 main.go:141] libmachine: Creating SSH key...
	I1009 12:40:44.323835    3960 main.go:141] libmachine: Creating Disk image...
	I1009 12:40:44.323843    3960 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:40:44.324052    3960 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:44.334496    3960 main.go:141] libmachine: STDOUT: 
	I1009 12:40:44.334518    3960 main.go:141] libmachine: STDERR: 
	I1009 12:40:44.334574    3960 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2 +20000M
	I1009 12:40:44.343196    3960 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:40:44.343210    3960 main.go:141] libmachine: STDERR: 
	I1009 12:40:44.343223    3960 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:44.343235    3960 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:40:44.343244    3960 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:44.343270    3960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:45:c3:8f:16:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:44.345048    3960 main.go:141] libmachine: STDOUT: 
	I1009 12:40:44.345060    3960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:44.345072    3960 client.go:171] duration metric: took 348.307875ms to LocalClient.Create
	I1009 12:40:46.347171    3960 start.go:128] duration metric: took 2.377481333s to createHost
	I1009 12:40:46.347214    3960 start.go:83] releasing machines lock for "kubernetes-upgrade-134000", held for 2.377639291s
	W1009 12:40:46.347380    3960 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:46.351823    3960 out.go:201] 
	W1009 12:40:46.355620    3960 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:40:46.355629    3960 out.go:270] * 
	* 
	W1009 12:40:46.356173    3960 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:40:46.366704    3960 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
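
Triage note: both create attempts above fail at the same host-side step, before any VM boots: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the restart attempts later in this test hit the identical error). A hedged health-check sketch for that dependency, assuming a Homebrew-managed socket_vmnet as described in minikube's qemu2 driver docs (service name and socket path may differ on other hosts):

	ls -l /var/run/socket_vmnet                  # does the daemon's unix socket exist?
	sudo launchctl list | grep -i socket_vmnet   # is the daemon registered with launchd?
	sudo brew services restart socket_vmnet      # typical first fix for a Homebrew install
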
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-134000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-134000: (1.962516541s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-134000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-134000 status --format={{.Host}}: exit status 7 (72.817666ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.214988834s)

-- stdout --
	* [kubernetes-upgrade-134000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-134000" primary control-plane node in "kubernetes-upgrade-134000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:40:48.447015    4005 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:40:48.447154    4005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:48.447158    4005 out.go:358] Setting ErrFile to fd 2...
	I1009 12:40:48.447161    4005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:40:48.447267    4005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:40:48.448346    4005 out.go:352] Setting JSON to false
	I1009 12:40:48.466322    4005 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4218,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:40:48.466401    4005 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:40:48.471134    4005 out.go:177] * [kubernetes-upgrade-134000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:40:48.479059    4005 notify.go:220] Checking for updates...
	I1009 12:40:48.481886    4005 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:40:48.488966    4005 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:40:48.496968    4005 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:40:48.504963    4005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:40:48.512025    4005 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:40:48.518928    4005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:40:48.523279    4005 config.go:182] Loaded profile config "kubernetes-upgrade-134000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1009 12:40:48.523555    4005 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:40:48.527987    4005 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:40:48.534843    4005 start.go:297] selected driver: qemu2
	I1009 12:40:48.534849    4005 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:40:48.534906    4005 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:40:48.537745    4005 cni.go:84] Creating CNI manager for ""
	I1009 12:40:48.537840    4005 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:40:48.537861    4005 start.go:340] cluster config:
	{Name:kubernetes-upgrade-134000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:40:48.542563    4005 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:40:48.546012    4005 out.go:177] * Starting "kubernetes-upgrade-134000" primary control-plane node in "kubernetes-upgrade-134000" cluster
	I1009 12:40:48.549986    4005 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:40:48.550003    4005 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:40:48.550011    4005 cache.go:56] Caching tarball of preloaded images
	I1009 12:40:48.550086    4005 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:40:48.550092    4005 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:40:48.550141    4005 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kubernetes-upgrade-134000/config.json ...
	I1009 12:40:48.550441    4005 start.go:360] acquireMachinesLock for kubernetes-upgrade-134000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:48.550488    4005 start.go:364] duration metric: took 40.792µs to acquireMachinesLock for "kubernetes-upgrade-134000"
	I1009 12:40:48.550497    4005 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:40:48.550501    4005 fix.go:54] fixHost starting: 
	I1009 12:40:48.550621    4005 fix.go:112] recreateIfNeeded on kubernetes-upgrade-134000: state=Stopped err=<nil>
	W1009 12:40:48.550631    4005 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:40:48.554997    4005 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-134000" ...
	I1009 12:40:48.562001    4005 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:48.562053    4005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:45:c3:8f:16:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:48.564360    4005 main.go:141] libmachine: STDOUT: 
	I1009 12:40:48.564378    4005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:48.564403    4005 fix.go:56] duration metric: took 13.9015ms for fixHost
	I1009 12:40:48.564407    4005 start.go:83] releasing machines lock for "kubernetes-upgrade-134000", held for 13.914709ms
	W1009 12:40:48.564413    4005 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:40:48.564456    4005 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:48.564461    4005 start.go:729] Will try again in 5 seconds ...
	I1009 12:40:53.564683    4005 start.go:360] acquireMachinesLock for kubernetes-upgrade-134000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:40:53.565263    4005 start.go:364] duration metric: took 497.834µs to acquireMachinesLock for "kubernetes-upgrade-134000"
	I1009 12:40:53.565373    4005 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:40:53.565389    4005 fix.go:54] fixHost starting: 
	I1009 12:40:53.566065    4005 fix.go:112] recreateIfNeeded on kubernetes-upgrade-134000: state=Stopped err=<nil>
	W1009 12:40:53.566091    4005 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:40:53.570521    4005 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-134000" ...
	I1009 12:40:53.580454    4005 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:40:53.580717    4005 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:45:c3:8f:16:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubernetes-upgrade-134000/disk.qcow2
	I1009 12:40:53.591828    4005 main.go:141] libmachine: STDOUT: 
	I1009 12:40:53.591895    4005 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:40:53.591983    4005 fix.go:56] duration metric: took 26.592458ms for fixHost
	I1009 12:40:53.592002    4005 start.go:83] releasing machines lock for "kubernetes-upgrade-134000", held for 26.711125ms
	W1009 12:40:53.592228    4005 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-134000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-134000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:40:53.601422    4005 out.go:201] 
	W1009 12:40:53.605474    4005 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:40:53.605509    4005 out.go:270] * 
	* 
	W1009 12:40:53.607670    4005 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:40:53.616386    4005 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-134000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-134000 version --output=json: exit status 1 (61.221083ms)

** stderr ** 
	error: context "kubernetes-upgrade-134000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-09 12:40:53.691029 -0700 PDT m=+3319.555290168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-134000 -n kubernetes-upgrade-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-134000 -n kubernetes-upgrade-134000: exit status 7 (37.866375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-134000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-134000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-134000
--- FAIL: TestKubernetesUpgrade (17.28s)
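
Note on the failure mode: every VM restart attempt in this test died at the same point, with the driver reporting `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. no socket_vmnet daemon was accepting connections on the host. A minimal check-and-restart sketch, assuming socket_vmnet was installed via Homebrew as described in the minikube qemu2 driver documentation (the socket and client paths are taken from the log above; the brew commands are the documented ones and have not been verified against this particular CI host):

	# Is anything listening where the qemu2 driver dials?
	ls -l /var/run/socket_vmnet
	# (Re)start the daemon; it must run as root because vmnet requires elevated privileges.
	brew tap homebrew/services
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet
	# Then retry the step that failed:
	out/minikube-darwin-arm64 start -p kubernetes-upgrade-134000 --kubernetes-version=v1.31.1 --driver=qemu2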

TestStoppedBinaryUpgrade/Upgrade (600.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3662320434 start -p stopped-upgrade-220000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3662320434 start -p stopped-upgrade-220000 --memory=2200 --vm-driver=qemu2 : (1m3.046713875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3662320434 -p stopped-upgrade-220000 stop
E1009 12:41:56.061887    1686 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/functional-517000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3662320434 -p stopped-upgrade-220000 stop: (12.107753917s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-220000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-220000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m45.529157916s)

-- stdout --
	* [stopped-upgrade-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-220000" primary control-plane node in "stopped-upgrade-220000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-220000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1009 12:42:01.707071    4045 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:42:01.707940    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:42:01.707947    4045 out.go:358] Setting ErrFile to fd 2...
	I1009 12:42:01.707951    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:42:01.708182    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:42:01.709912    4045 out.go:352] Setting JSON to false
	I1009 12:42:01.729577    4045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4291,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:42:01.729931    4045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:42:01.734268    4045 out.go:177] * [stopped-upgrade-220000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:42:01.742693    4045 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:42:01.742847    4045 notify.go:220] Checking for updates...
	I1009 12:42:01.751301    4045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:01.754253    4045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:42:01.757276    4045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:42:01.760259    4045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:42:01.763140    4045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:42:01.766584    4045 config.go:182] Loaded profile config "stopped-upgrade-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:01.770206    4045 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 12:42:01.771840    4045 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:42:01.776279    4045 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:42:01.783114    4045 start.go:297] selected driver: qemu2
	I1009 12:42:01.783141    4045 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53678 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:01.783192    4045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:42:01.786525    4045 cni.go:84] Creating CNI manager for ""
	I1009 12:42:01.786625    4045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:01.786791    4045 start.go:340] cluster config:
	{Name:stopped-upgrade-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53678 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:01.787012    4045 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:42:01.791220    4045 out.go:177] * Starting "stopped-upgrade-220000" primary control-plane node in "stopped-upgrade-220000" cluster
	I1009 12:42:01.799205    4045 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:01.799228    4045 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1009 12:42:01.799247    4045 cache.go:56] Caching tarball of preloaded images
	I1009 12:42:01.799323    4045 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:42:01.799328    4045 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1009 12:42:01.799382    4045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/config.json ...
	I1009 12:42:01.799832    4045 start.go:360] acquireMachinesLock for stopped-upgrade-220000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:42:01.799876    4045 start.go:364] duration metric: took 38.083µs to acquireMachinesLock for "stopped-upgrade-220000"
	I1009 12:42:01.799883    4045 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:42:01.799886    4045 fix.go:54] fixHost starting: 
	I1009 12:42:01.799988    4045 fix.go:112] recreateIfNeeded on stopped-upgrade-220000: state=Stopped err=<nil>
	W1009 12:42:01.799995    4045 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:42:01.807249    4045 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-220000" ...
	I1009 12:42:01.811470    4045 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:42:01.811558    4045 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53646-:22,hostfwd=tcp::53647-:2376,hostname=stopped-upgrade-220000 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/disk.qcow2
	I1009 12:42:01.857934    4045 main.go:141] libmachine: STDOUT: 
	I1009 12:42:01.857973    4045 main.go:141] libmachine: STDERR: 
	I1009 12:42:01.857980    4045 main.go:141] libmachine: Waiting for VM to start (ssh -p 53646 docker@127.0.0.1)...
	I1009 12:42:21.217943    4045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/config.json ...
	I1009 12:42:21.218360    4045 machine.go:93] provisionDockerMachine start ...
	I1009 12:42:21.218452    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.218755    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.218762    4045 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 12:42:21.279299    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 12:42:21.279325    4045 buildroot.go:166] provisioning hostname "stopped-upgrade-220000"
	I1009 12:42:21.279416    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.279532    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.279537    4045 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-220000 && echo "stopped-upgrade-220000" | sudo tee /etc/hostname
	I1009 12:42:21.339590    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-220000
	
	I1009 12:42:21.339661    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.339769    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.339778    4045 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-220000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-220000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-220000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 12:42:21.399658    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 12:42:21.399672    4045 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19780-1164/.minikube CaCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19780-1164/.minikube}
	I1009 12:42:21.399687    4045 buildroot.go:174] setting up certificates
	I1009 12:42:21.399691    4045 provision.go:84] configureAuth start
	I1009 12:42:21.399714    4045 provision.go:143] copyHostCerts
	I1009 12:42:21.399838    4045 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem, removing ...
	I1009 12:42:21.399856    4045 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem
	I1009 12:42:21.399972    4045 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.pem (1078 bytes)
	I1009 12:42:21.400172    4045 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem, removing ...
	I1009 12:42:21.400176    4045 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem
	I1009 12:42:21.400228    4045 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/cert.pem (1123 bytes)
	I1009 12:42:21.400342    4045 exec_runner.go:144] found /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem, removing ...
	I1009 12:42:21.400345    4045 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem
	I1009 12:42:21.400394    4045 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19780-1164/.minikube/key.pem (1679 bytes)
	I1009 12:42:21.400539    4045 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-220000 san=[127.0.0.1 localhost minikube stopped-upgrade-220000]
	I1009 12:42:21.505654    4045 provision.go:177] copyRemoteCerts
	I1009 12:42:21.506220    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 12:42:21.506231    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:42:21.535897    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 12:42:21.543173    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 12:42:21.550078    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 12:42:21.556616    4045 provision.go:87] duration metric: took 156.922083ms to configureAuth
	I1009 12:42:21.556625    4045 buildroot.go:189] setting minikube options for container-runtime
	I1009 12:42:21.556727    4045 config.go:182] Loaded profile config "stopped-upgrade-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:42:21.556776    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.556945    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.556950    4045 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 12:42:21.617218    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 12:42:21.617230    4045 buildroot.go:70] root file system type: tmpfs
	I1009 12:42:21.617288    4045 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 12:42:21.617348    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.617454    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.617487    4045 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 12:42:21.678077    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 12:42:21.678137    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:21.678245    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:21.678253    4045 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 12:42:22.059211    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1009 12:42:22.059230    4045 machine.go:96] duration metric: took 840.886708ms to provisionDockerMachine
	I1009 12:42:22.059237    4045 start.go:293] postStartSetup for "stopped-upgrade-220000" (driver="qemu2")
	I1009 12:42:22.059243    4045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 12:42:22.059328    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 12:42:22.059341    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:42:22.092143    4045 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 12:42:22.093459    4045 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 12:42:22.093466    4045 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/addons for local assets ...
	I1009 12:42:22.093557    4045 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19780-1164/.minikube/files for local assets ...
	I1009 12:42:22.093709    4045 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem -> 16862.pem in /etc/ssl/certs
	I1009 12:42:22.093918    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 12:42:22.096441    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:22.103646    4045 start.go:296] duration metric: took 44.404625ms for postStartSetup
	I1009 12:42:22.103660    4045 fix.go:56] duration metric: took 20.304353375s for fixHost
	I1009 12:42:22.103703    4045 main.go:141] libmachine: Using SSH client type: native
	I1009 12:42:22.103808    4045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d0a480] 0x100d0ccc0 <nil>  [] 0s} localhost 53646 <nil> <nil>}
	I1009 12:42:22.103814    4045 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 12:42:22.160929    4045 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728502942.516700962
	
	I1009 12:42:22.160938    4045 fix.go:216] guest clock: 1728502942.516700962
	I1009 12:42:22.160942    4045 fix.go:229] Guest: 2024-10-09 12:42:22.516700962 -0700 PDT Remote: 2024-10-09 12:42:22.103662 -0700 PDT m=+20.506754126 (delta=413.038962ms)
	I1009 12:42:22.160952    4045 fix.go:200] guest clock delta is within tolerance: 413.038962ms
	I1009 12:42:22.160955    4045 start.go:83] releasing machines lock for "stopped-upgrade-220000", held for 20.361656083s
	I1009 12:42:22.161042    4045 ssh_runner.go:195] Run: cat /version.json
	I1009 12:42:22.161056    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:42:22.161043    4045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 12:42:22.161123    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	W1009 12:42:22.161703    4045 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53878->127.0.0.1:53646: read: connection reset by peer
	I1009 12:42:22.161721    4045 retry.go:31] will retry after 304.87533ms: ssh: handshake failed: read tcp 127.0.0.1:53878->127.0.0.1:53646: read: connection reset by peer
	W1009 12:42:22.190184    4045 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1009 12:42:22.190255    4045 ssh_runner.go:195] Run: systemctl --version
	I1009 12:42:22.192057    4045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 12:42:22.193659    4045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 12:42:22.193694    4045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1009 12:42:22.196693    4045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1009 12:42:22.201542    4045 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 12:42:22.201549    4045 start.go:495] detecting cgroup driver to use...
	I1009 12:42:22.201682    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:22.208784    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1009 12:42:22.212295    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 12:42:22.215706    4045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 12:42:22.215741    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 12:42:22.219381    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.222572    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 12:42:22.225310    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 12:42:22.228338    4045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 12:42:22.231735    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 12:42:22.235137    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 12:42:22.238419    4045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 12:42:22.241479    4045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 12:42:22.244300    4045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 12:42:22.247579    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:22.327092    4045 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 12:42:22.334434    4045 start.go:495] detecting cgroup driver to use...
	I1009 12:42:22.334679    4045 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 12:42:22.341263    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:22.347108    4045 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 12:42:22.355136    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 12:42:22.360466    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 12:42:22.365611    4045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 12:42:22.420580    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 12:42:22.427502    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 12:42:22.435644    4045 ssh_runner.go:195] Run: which cri-dockerd
	I1009 12:42:22.437706    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 12:42:22.442909    4045 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 12:42:22.450908    4045 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 12:42:22.540916    4045 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 12:42:22.619766    4045 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 12:42:22.619848    4045 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 12:42:22.626180    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:22.710867    4045 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:23.833609    4045 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.122753917s)
	I1009 12:42:23.833685    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1009 12:42:23.838797    4045 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1009 12:42:23.845649    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:23.850387    4045 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 12:42:23.929808    4045 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 12:42:24.010743    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:24.086133    4045 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 12:42:24.091656    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1009 12:42:24.096726    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:24.175413    4045 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1009 12:42:24.214829    4045 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 12:42:24.214931    4045 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 12:42:24.217704    4045 start.go:563] Will wait 60s for crictl version
	I1009 12:42:24.217769    4045 ssh_runner.go:195] Run: which crictl
	I1009 12:42:24.219047    4045 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 12:42:24.233587    4045 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1009 12:42:24.233665    4045 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:24.252327    4045 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 12:42:24.272692    4045 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1009 12:42:24.272807    4045 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1009 12:42:24.274424    4045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 12:42:24.278809    4045 kubeadm.go:883] updating cluster {Name:stopped-upgrade-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53678 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1009 12:42:24.278859    4045 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1009 12:42:24.278924    4045 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:24.290514    4045 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:24.290524    4045 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:24.290587    4045 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:24.293984    4045 ssh_runner.go:195] Run: which lz4
	I1009 12:42:24.295594    4045 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 12:42:24.297369    4045 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 12:42:24.297394    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1009 12:42:25.284738    4045 docker.go:649] duration metric: took 989.218333ms to copy over tarball
	I1009 12:42:25.284813    4045 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 12:42:26.478324    4045 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.193525125s)
	I1009 12:42:26.478338    4045 ssh_runner.go:146] rm: /preloaded.tar.lz4
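
The sequence above is the preload path: a stat shows /preloaded.tar.lz4 is absent on the guest, the cached tarball is scp'd over (359514331 bytes), unpacked into /var with lz4-compressed tar so the docker image store is pre-populated, and the tarball is removed. A condensed sketch of the guest-side half, assuming the tarball has already been copied:

    // Sketch of the preload extraction step. Command strings mirror the
    // log; the surrounding program structure is an assumption.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("preload tarball missing; it would be scp'd over first")
    		return
    	}
    	// Same flags as the log: preserve xattrs, decompress with lz4,
    	// extract under /var (the docker image store lands in /var/lib/docker).
    	cmd := exec.Command("sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	os.Remove(tarball)
    }
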
	I1009 12:42:26.495219    4045 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 12:42:26.499002    4045 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1009 12:42:26.504944    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:26.593535    4045 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 12:42:28.121175    4045 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.52766525s)
	I1009 12:42:28.121283    4045 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 12:42:28.134275    4045 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 12:42:28.134295    4045 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1009 12:42:28.134301    4045 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 12:42:28.141062    4045 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.142944    4045 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.144548    4045 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.144674    4045 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.145439    4045 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.145603    4045 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.147636    4045 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:28.147643    4045 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.149362    4045 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.149381    4045 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.150576    4045 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:28.150682    4045 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:28.151812    4045 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.152236    4045 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1009 12:42:28.152769    4045 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:28.154361    4045 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1009 12:42:28.620835    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.635738    4045 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1009 12:42:28.637242    4045 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1009 12:42:28.637296    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W1009 12:42:28.638380    4045 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:28.638482    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.649465    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1009 12:42:28.652274    4045 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1009 12:42:28.652295    4045 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.652349    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1009 12:42:28.664202    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1009 12:42:28.664355    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:28.666119    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1009 12:42:28.666136    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1009 12:42:28.704327    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.708840    4045 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1009 12:42:28.708861    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1009 12:42:28.722515    4045 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1009 12:42:28.722542    4045 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.722601    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1009 12:42:28.751517    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.765902    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1009 12:42:28.765939    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1009 12:42:28.766089    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:28.767491    4045 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1009 12:42:28.767510    4045 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.767559    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1009 12:42:28.768112    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1009 12:42:28.768124    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1009 12:42:28.790272    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1009 12:42:28.888792    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.904632    4045 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1009 12:42:28.904659    4045 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.904727    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1009 12:42:28.942238    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1009 12:42:28.996749    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.035764    4045 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1009 12:42:29.035789    4045 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.035864    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1009 12:42:29.038313    4045 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1009 12:42:29.038348    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1009 12:42:29.051219    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1009 12:42:29.061265    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1009 12:42:29.201753    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1009 12:42:29.201807    4045 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1009 12:42:29.201827    4045 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1009 12:42:29.201866    4045 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1009 12:42:29.212241    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1009 12:42:29.212374    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.214399    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1009 12:42:29.214411    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1009 12:42:29.223651    4045 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1009 12:42:29.223665    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1009 12:42:29.251543    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1009 12:42:32.043151    4045 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 12:42:32.043243    4045 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:32.054670    4045 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 12:42:32.054691    4045 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:32.054749    4045 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:42:32.069807    4045 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 12:42:32.069950    4045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 12:42:32.071288    4045 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1009 12:42:32.071304    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1009 12:42:32.100076    4045 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 12:42:32.100089    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1009 12:42:32.358408    4045 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 12:42:32.358447    4045 cache_images.go:92] duration metric: took 4.224259625s to LoadCachedImages
	W1009 12:42:32.358485    4045 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
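
Each image handled in the block above follows the same decision: inspect the runtime's image ID, and if it does not match the expected hash, remove the stale tag and stream the cached tarball into docker load. A sketch of that per-image step, with the expected ID taken from the pause:3.7 log line (the sha256: prefix and the exact comparison are assumptions of this sketch):

    // ensureImage mirrors the per-image logic visible in the log:
    // compare IDs, drop a stale tag, then "sudo cat <tar> | docker load".
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func ensureImage(ref, expectedID, tarPath string) error {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	if err == nil && strings.TrimSpace(string(out)) == expectedID {
    		return nil // already present at the right hash
    	}
    	exec.Command("docker", "rmi", ref).Run() // drop any stale tag
    	load := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", tarPath))
    	return load.Run()
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.7",
    		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7")
    	fmt.Println(err)
    }
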
	I1009 12:42:32.358495    4045 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1009 12:42:32.358555    4045 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-220000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 12:42:32.358629    4045 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 12:42:32.372709    4045 cni.go:84] Creating CNI manager for ""
	I1009 12:42:32.372722    4045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:42:32.372729    4045 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 12:42:32.372737    4045 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-220000 NodeName:stopped-upgrade-220000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 12:42:32.372818    4045 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-220000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 12:42:32.372877    4045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1009 12:42:32.376234    4045 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 12:42:32.376283    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 12:42:32.379575    4045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1009 12:42:32.385288    4045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 12:42:32.390815    4045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1009 12:42:32.397410    4045 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1009 12:42:32.398808    4045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 12:42:32.403081    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:42:32.486911    4045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:42:32.493449    4045 certs.go:68] Setting up /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000 for IP: 10.0.2.15
	I1009 12:42:32.493457    4045 certs.go:194] generating shared ca certs ...
	I1009 12:42:32.493468    4045 certs.go:226] acquiring lock for ca certs: {Name:mkbf858b3b2074a12d126c3a2fed20f98f420e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.493618    4045 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key
	I1009 12:42:32.493678    4045 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key
	I1009 12:42:32.493685    4045 certs.go:256] generating profile certs ...
	I1009 12:42:32.494530    4045 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key
	I1009 12:42:32.494552    4045 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d
	I1009 12:42:32.494562    4045 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1009 12:42:32.538865    4045 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d ...
	I1009 12:42:32.538884    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d: {Name:mk636c31666e9b6925eca9992cc4574f1553d5aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.539323    4045 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d ...
	I1009 12:42:32.539332    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d: {Name:mkebc0ee7e2a420801c61f60a85aae3f650ed1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.539508    4045 certs.go:381] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt.dd5efd0d -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt
	I1009 12:42:32.539639    4045 certs.go:385] copying /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key.dd5efd0d -> /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key
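
The apiserver certificate generated above carries four IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A self-signed stand-in built with crypto/x509 shows how such SANs are attached; minikube actually signs with its minikubeCA rather than self-signing, and the key size and validity below are assumptions of this sketch:

    // Sketch: build a certificate carrying the IP SANs from the log line.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		IPAddresses: []net.IP{ // SANs from the log line above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
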
	I1009 12:42:32.539891    4045 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/proxy-client.key
	I1009 12:42:32.540032    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem (1338 bytes)
	W1009 12:42:32.540058    4045 certs.go:480] ignoring /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686_empty.pem, impossibly tiny 0 bytes
	I1009 12:42:32.540065    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 12:42:32.540087    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem (1078 bytes)
	I1009 12:42:32.540108    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem (1123 bytes)
	I1009 12:42:32.540127    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/key.pem (1679 bytes)
	I1009 12:42:32.540168    4045 certs.go:484] found cert: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem (1708 bytes)
	I1009 12:42:32.540720    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 12:42:32.551874    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 12:42:32.559904    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 12:42:32.567688    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 12:42:32.575516    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 12:42:32.583003    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 12:42:32.590801    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 12:42:32.598147    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 12:42:32.604948    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/ssl/certs/16862.pem --> /usr/share/ca-certificates/16862.pem (1708 bytes)
	I1009 12:42:32.611975    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 12:42:32.619380    4045 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/1686.pem --> /usr/share/ca-certificates/1686.pem (1338 bytes)
	I1009 12:42:32.626852    4045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 12:42:32.632013    4045 ssh_runner.go:195] Run: openssl version
	I1009 12:42:32.634096    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 12:42:32.637260    4045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.638712    4045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.638740    4045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 12:42:32.640586    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 12:42:32.644240    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1686.pem && ln -fs /usr/share/ca-certificates/1686.pem /etc/ssl/certs/1686.pem"
	I1009 12:42:32.647645    4045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.649299    4045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:49 /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.649330    4045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1686.pem
	I1009 12:42:32.651099    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1686.pem /etc/ssl/certs/51391683.0"
	I1009 12:42:32.654172    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16862.pem && ln -fs /usr/share/ca-certificates/16862.pem /etc/ssl/certs/16862.pem"
	I1009 12:42:32.657261    4045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.658632    4045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:49 /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.658655    4045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16862.pem
	I1009 12:42:32.660414    4045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16862.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 12:42:32.663935    4045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 12:42:32.665488    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 12:42:32.667970    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 12:42:32.670111    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 12:42:32.672228    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 12:42:32.674237    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 12:42:32.676095    4045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
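
The openssl runs above use -checkend 86400, which exits non-zero if a certificate expires within the next 24 hours. The same check expressed in Go, against one of the paths from the log:

    // Go equivalent of "openssl x509 -checkend 86400": report whether
    // the certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate will expire within 86400s")
    	} else {
    		fmt.Println("certificate is valid beyond 86400s")
    	}
    }
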
	I1009 12:42:32.678001    4045 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53678 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-220000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1009 12:42:32.678087    4045 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.688536    4045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 12:42:32.691976    4045 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 12:42:32.691983    4045 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 12:42:32.692017    4045 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 12:42:32.695570    4045 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 12:42:32.695840    4045 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-220000" does not appear in /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:42:32.696155    4045 kubeconfig.go:62] /Users/jenkins/minikube-integration/19780-1164/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-220000" cluster setting kubeconfig missing "stopped-upgrade-220000" context setting]
	I1009 12:42:32.696347    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:42:32.696783    4045 kapi.go:59] client config for stopped-upgrade-220000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027600f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 12:42:32.697263    4045 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 12:42:32.700043    4045 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-220000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
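
The drift detection above rests on diff's exit status: diff -u exits non-zero when the rendered kubeadm.yaml.new differs from the copy already on the host, and that is the trigger to reconfigure the cluster. A minimal sketch of the check (paths mirror the log; the surrounding program is an assumption):

    // Sketch: detect kubeadm config drift via diff's exit code.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	if err != nil { // non-zero exit: the files differ (or diff itself failed)
    		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
    		return
    	}
    	fmt.Println("kubeadm config unchanged")
    }
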
	I1009 12:42:32.700050    4045 kubeadm.go:1160] stopping kube-system containers ...
	I1009 12:42:32.700094    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 12:42:32.710966    4045 docker.go:483] Stopping containers: [fa75835cea07 feeebbcd5fb9 c89b00d98989 90a81eccf4ba 7ab74f2cae22 fa3ceaf6ef5a 2ea2df6dd5b5 1de8f5d61449]
	I1009 12:42:32.711041    4045 ssh_runner.go:195] Run: docker stop fa75835cea07 feeebbcd5fb9 c89b00d98989 90a81eccf4ba 7ab74f2cae22 fa3ceaf6ef5a 2ea2df6dd5b5 1de8f5d61449
	I1009 12:42:32.721745    4045 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 12:42:32.727454    4045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:42:32.730709    4045 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 12:42:32.730715    4045 kubeadm.go:157] found existing configuration files:
	
	I1009 12:42:32.730751    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf
	I1009 12:42:32.733220    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 12:42:32.733252    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:42:32.736103    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf
	I1009 12:42:32.739311    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 12:42:32.739345    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:42:32.742420    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.744900    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 12:42:32.744925    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:42:32.747937    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf
	I1009 12:42:32.750981    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 12:42:32.751006    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 12:42:32.753657    4045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:42:32.756391    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:32.778474    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.412317    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.537292    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.560371    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 12:42:33.587475    4045 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:42:33.587554    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.090080    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:34.589593    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:35.089612    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:42:35.093856    4045 api_server.go:72] duration metric: took 1.506426666s to wait for apiserver process to appear ...
	I1009 12:42:35.093868    4045 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:42:35.093878    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:40.095831    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:40.095894    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:45.096146    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:45.096195    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:50.096780    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:50.096812    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:42:55.097647    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:42:55.097667    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:00.098376    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:00.098487    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:05.099808    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:05.099895    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:10.101117    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:10.101164    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:15.102916    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:15.102959    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:20.105100    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:20.105120    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:25.106452    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:25.106548    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:30.107489    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:30.107570    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:35.109896    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
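
Each healthz probe above gives up after roughly five seconds (the client timeout visible in the ~5s gaps between attempts), and the loop keeps retrying before falling back to log collection. A sketch of that polling shape; the overall deadline and the skipped TLS verification are both assumptions of this sketch, not minikube's settings:

    // Sketch: poll the apiserver's /healthz with a per-request timeout
    // and an overall deadline.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("gave up waiting for /healthz")
    }
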
	I1009 12:43:35.109991    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:35.120754    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:35.120842    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:35.131533    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:35.131610    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:35.141808    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:35.141883    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:35.152485    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:35.152578    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:35.162770    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:35.162851    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:35.174137    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:35.174219    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:35.190665    4045 logs.go:282] 0 containers: []
	W1009 12:43:35.190679    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:35.190747    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:35.200999    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:35.201016    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:35.201021    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:35.227912    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:35.227921    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:35.232401    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:35.232408    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:35.246677    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:35.246690    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:35.276855    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:35.276865    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:35.289077    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:35.289089    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:35.305415    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:35.305426    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:35.323538    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:35.323548    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:35.417702    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:35.417712    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:35.432900    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:35.432911    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:35.450689    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:35.450699    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:35.476382    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:35.476394    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:35.489766    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:35.489778    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:35.530384    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:35.530396    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:43:35.547946    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:35.547956    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:35.559696    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:35.559709    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:38.073588    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:43.075939    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:43.076101    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:43.089317    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:43.089416    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:43.101036    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:43.101108    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:43.111776    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:43.111860    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:43.122163    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:43.122253    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:43.132783    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:43.132863    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:43.143175    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:43.143245    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:43.153404    4045 logs.go:282] 0 containers: []
	W1009 12:43:43.153417    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:43.153487    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:43.164665    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:43.164683    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:43.164689    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:43.200746    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:43.200761    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:43.225556    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:43.225566    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:43.239905    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:43.239919    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:43.251947    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:43.251957    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:43.265986    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:43.265996    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:43.277682    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:43.277697    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:43:43.289226    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:43.289236    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:43.302283    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:43.302295    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:43.325989    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:43.325997    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:43.362758    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:43.362765    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:43.366590    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:43.366599    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:43.384168    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:43.384182    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:43.400799    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:43.400809    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:43.411922    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:43.411934    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:43.426908    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:43.426919    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
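
The cycle above shows the pattern that repeats throughout this log: probe the apiserver healthz endpoint, hit the client timeout, then fall back to enumerating and tailing component containers. A minimal Go sketch of the probe step follows; it is illustrative only, not minikube's actual api_server.go code. The endpoint URL and the ~5-second timeout are taken from the log lines; skipping TLS verification is an assumption made to keep the sketch self-contained.

```go
// Minimal sketch of the healthz polling pattern visible above: probe the
// apiserver with a hard client timeout and retry on failure.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// Matches the ~5s gap between each "Checking" and "stopped" line above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: verification skipped only to keep the sketch
			// self-contained; a real client would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this yields the same class of error as the log:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // back off before the next probe
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```

In the log, the probe never succeeds, so control always falls through to the container enumeration and log gathering that make up the rest of each cycle.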
	I1009 12:43:45.939937    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:50.942112    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:50.942266    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:50.952626    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:50.952697    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:50.970200    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:50.970290    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:50.980737    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:50.980810    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:50.991873    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:50.991951    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:51.002145    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:51.002221    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:51.012482    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:51.012559    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:51.022703    4045 logs.go:282] 0 containers: []
	W1009 12:43:51.022713    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:51.022775    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:51.036349    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:51.036367    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:51.036373    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:51.050356    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:51.050368    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:51.064926    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:51.064936    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:51.078365    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:51.078376    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:51.092770    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:51.092780    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:51.109984    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:51.109995    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:51.121461    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:51.121476    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:51.133296    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:51.133317    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:51.145755    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:51.145766    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:51.170930    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:51.170940    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:51.209759    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:51.209769    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:43:51.222431    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:51.222441    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:51.239728    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:51.239738    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:51.263645    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:51.263653    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:51.267959    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:51.267964    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:51.279748    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:51.279761    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
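
The `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` runs in each cycle enumerate every container (running or exited) that the kubelet created for a given control-plane component. A sketch of that step, using the same docker CLI invocation shown in the log; the helper name is hypothetical and this is not minikube's logs.go implementation:

```go
// Sketch of the container-enumeration step above: list the IDs of all
// containers whose name matches a kubelet-managed component.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	// Kubelet-created containers are named k8s_<component>_..., hence the filter.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per output line, e.g. [997d69e6cf17 7ab74f2cae22] above.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```

Two IDs per component (as for kube-apiserver, etcd, the scheduler, and the controller-manager above) indicate a restarted container alongside its exited predecessor, which is why both get their logs tailed.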
	I1009 12:43:53.821440    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:43:58.823513    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:43:58.823638    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:43:58.835283    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:43:58.835373    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:43:58.846672    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:43:58.846755    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:43:58.859854    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:43:58.859934    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:43:58.871710    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:43:58.871808    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:43:58.883484    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:43:58.883562    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:43:58.895613    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:43:58.895690    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:43:58.909613    4045 logs.go:282] 0 containers: []
	W1009 12:43:58.909626    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:43:58.909700    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:43:58.919445    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:43:58.919473    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:43:58.919479    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:43:58.958387    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:43:58.958403    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:43:58.994228    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:43:58.994238    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:43:59.008464    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:43:59.008475    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:43:59.023915    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:43:59.023927    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:43:59.046492    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:43:59.046510    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:43:59.061893    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:43:59.061903    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:43:59.073510    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:43:59.073522    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:43:59.085311    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:43:59.085323    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:43:59.114899    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:43:59.114911    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:43:59.128681    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:43:59.128692    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:43:59.147405    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:43:59.147415    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:43:59.173417    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:43:59.173425    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:43:59.177361    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:43:59.177369    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:43:59.188992    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:43:59.189002    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:43:59.200483    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:43:59.200496    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
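
Each "Gathering logs for ..." pair then fans out over the discovered IDs plus the host-side sources (kubelet and Docker units via journalctl, kernel messages via dmesg), tailing 400 lines from each. A compact sketch of that fan-out under the same commands the log shows; it is illustrative only, since the real code dispatches these through ssh_runner inside the guest VM, where they run under sudo:

```go
// Sketch of the log-gathering fan-out above: run each collection command
// and print its output under a "Gathering logs for ..." banner.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name string, args ...string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Println("  error:", err) // journalctl/dmesg may need root privileges
	}
	fmt.Print(string(out))
}

func main() {
	// Per-container logs, mirroring `docker logs --tail 400 <id>`;
	// the ID is one of the kube-apiserver containers from the log.
	gather("kube-apiserver [7ab74f2cae22]", "docker", "logs", "--tail", "400", "7ab74f2cae22")
	// Host-side sources, mirroring the journalctl invocations above.
	gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	gather("Docker", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
}
```

The dmesg variant in the log additionally filters to warn-through-emerg levels with color and paging disabled (`-P -H -L=never --level ...`) and keeps only the last 400 lines via `tail`.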
	I1009 12:44:01.713396    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:06.715321    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:06.715429    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:06.732907    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:06.732992    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:06.744191    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:06.744270    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:06.755664    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:06.755742    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:06.767078    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:06.767168    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:06.779185    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:06.779268    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:06.791082    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:06.791160    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:06.802129    4045 logs.go:282] 0 containers: []
	W1009 12:44:06.802143    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:06.802211    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:06.813302    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:06.813322    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:06.813327    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:06.854832    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:06.854849    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:06.882171    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:06.882183    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:06.896555    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:06.896565    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:06.907978    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:06.907987    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:06.931469    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:06.931475    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:06.943219    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:06.943228    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:06.978241    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:06.978256    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:06.992402    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:06.992416    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:07.004346    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:07.004357    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:07.018356    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:07.018366    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:07.029504    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:07.029514    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:07.033761    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:07.033768    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:07.047241    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:07.047252    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:07.058584    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:07.058596    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:07.073546    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:07.073556    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:09.593294    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:14.593765    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:14.593862    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:14.605813    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:14.605896    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:14.618313    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:14.618393    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:14.637433    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:14.637520    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:14.649104    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:14.649186    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:14.660108    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:14.660190    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:14.671544    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:14.671629    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:14.682727    4045 logs.go:282] 0 containers: []
	W1009 12:44:14.682738    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:14.682811    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:14.706684    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:14.706702    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:14.706708    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:14.719663    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:14.719676    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:14.760556    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:14.760566    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:14.775382    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:14.775394    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:14.791079    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:14.791090    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:14.803321    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:14.803335    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:14.815901    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:14.815913    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:14.849713    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:14.849723    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:14.866925    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:14.866937    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:14.880570    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:14.880580    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:14.904999    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:14.905006    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:14.918988    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:14.918999    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:14.930831    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:14.930841    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:14.935597    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:14.935604    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:14.970196    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:14.970206    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:14.985661    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:14.985671    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:17.498024    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:22.499575    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:22.499688    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:22.510652    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:22.510734    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:22.521626    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:22.521704    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:22.532142    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:22.532230    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:22.543395    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:22.543476    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:22.555058    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:22.555145    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:22.566782    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:22.566867    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:22.577408    4045 logs.go:282] 0 containers: []
	W1009 12:44:22.577422    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:22.577496    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:22.592480    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:22.592501    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:22.592508    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:22.619369    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:22.619386    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:22.631728    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:22.631739    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:22.644784    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:22.644796    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:22.666979    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:22.666989    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:22.685482    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:22.685495    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:22.701417    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:22.701429    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:22.742815    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:22.742830    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:22.756356    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:22.756369    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:22.780856    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:22.780875    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:22.793101    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:22.793111    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:22.805598    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:22.805609    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:22.810221    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:22.810230    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:22.848203    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:22.848218    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:22.863160    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:22.863171    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:22.877588    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:22.877599    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:25.394240    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:30.396584    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:30.396766    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:30.414415    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:30.414510    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:30.429458    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:30.429545    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:30.441686    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:30.441771    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:30.456780    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:30.456859    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:30.470400    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:30.470477    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:30.482614    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:30.482694    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:30.494546    4045 logs.go:282] 0 containers: []
	W1009 12:44:30.494561    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:30.494635    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:30.506137    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:30.506154    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:30.506159    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:30.522013    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:30.522026    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:30.534791    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:30.534803    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:30.546940    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:30.546951    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:30.572951    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:30.572965    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:30.614376    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:30.614390    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:30.652105    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:30.652116    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:30.679192    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:30.679203    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:30.700849    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:30.700864    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:30.713389    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:30.713401    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:30.730821    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:30.730837    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:30.745429    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:30.745443    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:30.758223    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:30.758237    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:30.763268    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:30.763276    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:30.778206    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:30.778216    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:30.808418    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:30.808434    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:33.323006    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:38.325307    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:38.325528    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:38.340226    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:38.340316    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:38.357540    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:38.357586    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:38.369136    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:38.369236    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:38.381043    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:38.381090    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:38.392244    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:38.392330    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:38.403136    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:38.403219    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:38.414739    4045 logs.go:282] 0 containers: []
	W1009 12:44:38.414749    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:38.414816    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:38.430083    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:38.430096    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:38.430100    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:38.445561    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:38.445573    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:38.460842    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:38.460854    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:38.479546    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:38.479557    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:38.502447    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:38.502460    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:38.527852    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:38.527867    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:38.568687    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:38.568703    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:38.601781    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:38.601797    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:38.618544    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:38.618557    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:38.632059    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:38.632071    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:38.669011    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:38.669022    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:38.683525    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:38.683538    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:38.687932    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:38.687942    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:38.702226    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:38.702237    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:38.714553    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:38.714565    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:38.728522    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:38.728540    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:41.243425    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:46.246024    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:46.246498    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:46.279564    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:46.279706    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:46.299076    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:46.299259    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:46.314895    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:46.314995    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:46.336659    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:46.336730    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:46.349514    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:46.349591    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:46.365929    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:46.366007    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:46.377830    4045 logs.go:282] 0 containers: []
	W1009 12:44:46.377840    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:46.377907    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:46.389858    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:46.389876    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:46.389882    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:46.429634    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:46.429644    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:46.434709    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:46.434721    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:46.450312    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:46.450326    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:46.466960    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:46.466978    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:46.488757    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:46.488766    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:46.529641    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:46.529651    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:46.544050    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:46.544064    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:46.558156    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:46.558169    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:46.570035    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:46.570046    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:46.582695    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:46.582707    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:46.595695    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:46.595708    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:46.622282    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:46.622294    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:46.649224    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:46.649241    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:46.662245    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:46.662257    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:46.680770    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:46.680784    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:49.195702    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:44:54.197987    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:44:54.198194    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:44:54.211598    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:44:54.211695    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:44:54.223331    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:44:54.223412    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:44:54.234655    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:44:54.234729    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:44:54.245735    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:44:54.245812    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:44:54.256745    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:44:54.256821    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:44:54.268172    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:44:54.268250    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:44:54.284777    4045 logs.go:282] 0 containers: []
	W1009 12:44:54.284789    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:44:54.284861    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:44:54.296062    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:44:54.296080    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:44:54.296087    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:44:54.322638    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:44:54.322654    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:44:54.348163    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:44:54.348177    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:44:54.389235    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:44:54.389247    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:44:54.428039    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:44:54.428051    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:44:54.440713    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:44:54.440725    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:44:54.459504    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:44:54.459515    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:44:54.476804    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:44:54.476817    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:44:54.489429    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:44:54.489442    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:44:54.502571    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:44:54.502581    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:44:54.515416    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:44:54.515429    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:44:54.530393    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:44:54.530403    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:44:54.545748    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:44:54.545762    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:44:54.562600    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:44:54.562612    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:44:54.575573    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:44:54.575582    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:44:54.580373    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:44:54.580384    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:44:57.097817    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:02.099391    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:02.099601    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:02.114654    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:02.114748    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:02.127506    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:02.127583    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:02.139143    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:02.139211    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:02.150439    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:02.150512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:02.161939    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:02.162017    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:02.174058    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:02.174136    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:02.186054    4045 logs.go:282] 0 containers: []
	W1009 12:45:02.186061    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:02.186092    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:02.197521    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:02.197538    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:02.197543    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:02.212844    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:02.212854    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:02.240134    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:02.240159    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:02.254915    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:02.254925    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:02.268384    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:02.268398    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:02.280559    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:02.280569    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:02.306688    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:02.306707    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:02.319849    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:02.319867    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:02.333165    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:02.333179    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:02.347669    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:02.347681    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:02.360132    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:02.360149    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:02.401622    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:02.401631    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:02.406608    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:02.406616    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:02.441982    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:02.441993    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:02.460971    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:02.460983    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:02.486296    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:02.486313    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:05.009541    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:10.011963    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:10.012579    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:10.052083    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:10.052225    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:10.074651    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:10.074775    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:10.091373    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:10.091482    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:10.108462    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:10.108543    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:10.122028    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:10.122106    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:10.136621    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:10.136701    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:10.147949    4045 logs.go:282] 0 containers: []
	W1009 12:45:10.147959    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:10.148023    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:10.172379    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:10.172394    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:10.172399    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:10.188412    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:10.188424    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:10.203001    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:10.203014    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:10.215834    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:10.215846    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:10.231001    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:10.231017    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:10.246184    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:10.246198    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:10.271973    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:10.271991    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:10.276938    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:10.276947    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:10.293284    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:10.293296    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:10.331461    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:10.331479    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:10.369646    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:10.369658    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:10.385213    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:10.385223    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:10.404880    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:10.404893    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:10.422621    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:10.422633    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:10.441340    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:10.441358    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:10.454140    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:10.454152    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
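For reference, the full set of log sources gathered on every round, collected into one table (a display-only sketch; the commands are copied verbatim from the Run: lines above, and <ID> stands for each discovered container ID):

    // log-source table; map iteration order is not stable, which is fine
    // for a reference printout.
    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "<component> [ID]": "docker logs --tail 400 <ID>",
        }
        for name, cmd := range sources {
            fmt.Printf("%-18s %s\n", name, cmd)
        }
    }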
	I1009 12:45:12.996222    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:17.998874    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:17.999390    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:18.036246    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:18.036386    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:18.056664    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:18.056762    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:18.075089    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:18.075140    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:18.089339    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:18.089411    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:18.101199    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:18.101261    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:18.113164    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:18.113238    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:18.125265    4045 logs.go:282] 0 containers: []
	W1009 12:45:18.125275    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:18.125338    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:18.137332    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:18.137349    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:18.137356    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:18.182268    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:18.182281    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:18.186892    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:18.186897    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:18.202866    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:18.202879    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:18.215768    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:18.215781    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:18.231938    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:18.231948    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:18.244459    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:18.244475    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:18.263043    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:18.263056    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:18.276722    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:18.276733    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:18.291398    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:18.291414    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:18.318537    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:18.318551    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:18.333791    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:18.333806    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:18.346233    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:18.346245    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:18.360808    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:18.360821    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:18.421517    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:18.421532    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:18.447447    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:18.447461    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:20.962796    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:25.965495    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:25.965950    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:25.999429    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:25.999563    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:26.019385    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:26.019492    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:26.034577    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:26.034662    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:26.047624    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:26.047713    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:26.063992    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:26.064069    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:26.081209    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:26.081287    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:26.092778    4045 logs.go:282] 0 containers: []
	W1009 12:45:26.092791    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:26.092860    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:26.108195    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:26.108211    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:26.108217    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:26.150471    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:26.150490    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:26.163655    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:26.163667    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:26.187960    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:26.187973    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:26.192556    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:26.192566    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:26.210334    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:26.210343    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:26.225159    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:26.225167    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:26.241823    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:26.241838    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:26.270605    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:26.270614    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:26.283042    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:26.283055    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:26.321595    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:26.321607    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:26.336829    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:26.336838    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:26.351343    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:26.351355    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:26.377148    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:26.377165    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:26.388959    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:26.388970    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:26.401623    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:26.401635    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:28.916735    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:33.919178    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:33.919316    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:33.931145    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:33.931245    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:33.942816    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:33.942876    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:33.954144    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:33.954229    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:33.970997    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:33.971058    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:33.982951    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:33.983014    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:33.994186    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:33.994268    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:34.004773    4045 logs.go:282] 0 containers: []
	W1009 12:45:34.004786    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:34.004860    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:34.019518    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:34.019534    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:34.019541    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:34.061407    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:34.061419    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:34.076352    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:34.076371    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:34.091855    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:34.091872    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:34.108543    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:34.108555    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:34.122343    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:34.122356    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:34.142550    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:34.142563    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:34.147120    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:34.147128    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:34.162985    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:34.163000    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:34.182355    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:34.182368    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:34.205513    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:34.205527    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:34.221944    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:34.221956    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:34.246346    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:34.246356    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:34.288282    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:34.288292    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:34.300181    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:34.300194    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:34.313441    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:34.313452    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:36.841002    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:41.843494    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:41.843680    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:41.855978    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:41.856061    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:41.866858    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:41.866943    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:41.877433    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:41.877510    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:41.891439    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:41.891516    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:41.904869    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:41.904953    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:41.922479    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:41.922566    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:41.934221    4045 logs.go:282] 0 containers: []
	W1009 12:45:41.934234    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:41.934307    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:41.946257    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:41.946274    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:41.946279    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:41.950815    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:41.950830    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:41.966050    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:41.966068    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:41.994768    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:41.994790    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:42.009876    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:42.009885    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:42.024484    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:42.024497    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:42.036946    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:42.036957    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:42.055860    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:42.055874    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:42.069829    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:42.069844    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:42.111707    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:42.111722    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:42.151932    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:42.151945    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:42.168388    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:42.168401    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:42.183381    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:42.183393    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:42.207132    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:42.207150    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:42.223310    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:42.223327    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:42.237324    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:42.237336    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:44.752101    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:49.754360    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:49.754550    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:49.766566    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:49.766644    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:49.778162    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:49.778247    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:49.788670    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:49.788715    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:49.800275    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:49.800359    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:49.811704    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:49.811781    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:49.822996    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:49.823073    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:49.834227    4045 logs.go:282] 0 containers: []
	W1009 12:45:49.834241    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:49.834311    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:49.845198    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:49.845219    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:49.845225    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:49.850093    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:49.850104    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:49.865732    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:49.865744    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:49.884181    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:49.884189    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:49.896513    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:49.896524    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:49.912172    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:49.912187    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:49.927589    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:49.927602    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:49.940477    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:49.940488    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:49.953762    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:49.953772    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:49.968375    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:49.968386    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:49.984629    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:49.984645    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:50.008491    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:50.008502    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:50.048061    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:50.048070    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:50.086774    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:50.086787    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:50.113047    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:50.113062    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:45:50.125376    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:50.125388    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:52.645004    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:45:57.647200    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:45:57.647312    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:45:57.659183    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:45:57.659263    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:45:57.669866    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:45:57.669950    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:45:57.680701    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:45:57.680786    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:45:57.691518    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:45:57.691596    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:45:57.701743    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:45:57.701823    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:45:57.713427    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:45:57.713512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:45:57.724357    4045 logs.go:282] 0 containers: []
	W1009 12:45:57.724371    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:45:57.724443    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:45:57.735734    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:45:57.735750    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:45:57.735756    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:45:57.751304    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:45:57.751316    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:45:57.770537    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:45:57.770550    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:45:57.795741    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:45:57.795759    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:45:57.800943    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:45:57.800952    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:45:57.813825    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:45:57.813836    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:45:57.839724    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:45:57.839738    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:45:57.854318    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:45:57.854330    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:45:57.870683    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:45:57.870701    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:45:57.911935    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:45:57.911952    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:45:57.927298    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:45:57.927312    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:45:57.940835    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:45:57.940848    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:45:57.955262    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:45:57.955276    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:45:57.968226    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:45:57.968237    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:45:57.982445    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:45:57.982457    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:45:58.022029    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:45:58.022039    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:00.535875    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:05.538235    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:05.538834    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:05.586848    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:05.587007    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:05.607015    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:05.607131    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:05.622737    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:05.622824    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:05.635812    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:05.635899    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:05.650496    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:05.650574    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:05.663267    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:05.663351    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:05.676109    4045 logs.go:282] 0 containers: []
	W1009 12:46:05.676121    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:05.676192    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:05.687855    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:05.687874    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:05.687880    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:05.714235    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:05.714252    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:05.729945    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:05.729957    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:05.744153    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:05.744163    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:05.786305    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:05.786320    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:05.791071    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:05.791079    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:05.831078    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:05.831090    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:05.845490    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:05.845506    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:05.861788    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:05.861797    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:05.880894    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:05.880906    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:05.906243    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:05.906255    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:05.919951    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:05.919966    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:05.932937    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:05.932950    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:05.948777    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:05.948789    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:05.962016    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:05.962030    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:05.977885    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:05.977900    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:08.490689    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:13.493104    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:13.493607    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:13.529731    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:13.529889    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:13.550287    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:13.550406    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:13.565595    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:13.565687    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:13.578860    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:13.578949    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:13.591960    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:13.592119    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:13.604033    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:13.604115    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:13.615411    4045 logs.go:282] 0 containers: []
	W1009 12:46:13.615424    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:13.615497    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:13.627518    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:13.627543    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:13.627550    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:13.665302    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:13.665314    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:13.680030    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:13.680040    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:13.695750    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:13.695764    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:13.700505    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:13.700516    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:13.713332    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:13.713344    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:13.726283    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:13.726295    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:13.752572    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:13.752584    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:13.768417    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:13.768431    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:13.786692    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:13.786701    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:13.811680    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:13.811693    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:13.832701    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:13.832712    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:13.845794    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:13.845806    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:13.867102    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:13.867111    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:13.881332    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:13.881341    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:13.894463    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:13.894472    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:16.438535    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:21.441219    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:21.441771    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:21.485198    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:21.485362    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:21.506236    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:21.506359    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:21.522198    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:21.522288    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:21.535642    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:21.535727    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:21.547434    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:21.547510    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:21.560044    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:21.560128    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:21.571812    4045 logs.go:282] 0 containers: []
	W1009 12:46:21.571824    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:21.571894    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:21.583990    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:21.584006    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:21.584011    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:21.597380    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:21.597392    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:21.620939    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:21.620959    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:21.637116    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:21.637132    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:21.651566    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:21.651575    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:21.666490    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:21.666501    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:21.679120    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:21.679134    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:21.692274    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:21.692289    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:21.733872    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:21.733894    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:21.791517    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:21.791533    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:21.808098    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:21.808115    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:21.827425    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:21.827435    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:21.847011    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:21.847024    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:21.859486    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:21.859499    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:21.864415    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:21.864424    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:21.880349    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:21.880360    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:24.408910    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:29.411139    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:29.411633    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:46:29.441667    4045 logs.go:282] 2 containers: [997d69e6cf17 7ab74f2cae22]
	I1009 12:46:29.441815    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:46:29.460687    4045 logs.go:282] 2 containers: [8f9e9d90d51f fa75835cea07]
	I1009 12:46:29.460787    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:46:29.478603    4045 logs.go:282] 1 containers: [58edafb4ebe5]
	I1009 12:46:29.478681    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:46:29.490308    4045 logs.go:282] 2 containers: [f7a145523693 c89b00d98989]
	I1009 12:46:29.490375    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:46:29.501692    4045 logs.go:282] 1 containers: [fbc6dcf8f87b]
	I1009 12:46:29.501743    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:46:29.513035    4045 logs.go:282] 2 containers: [083f64598caf feeebbcd5fb9]
	I1009 12:46:29.513079    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:46:29.524555    4045 logs.go:282] 0 containers: []
	W1009 12:46:29.524564    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:46:29.524600    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:46:29.536529    4045 logs.go:282] 1 containers: [5dc06814468e]
	I1009 12:46:29.536544    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:46:29.536549    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:46:29.576518    4045 logs.go:123] Gathering logs for kube-apiserver [997d69e6cf17] ...
	I1009 12:46:29.576529    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 997d69e6cf17"
	I1009 12:46:29.595212    4045 logs.go:123] Gathering logs for etcd [8f9e9d90d51f] ...
	I1009 12:46:29.595223    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9e9d90d51f"
	I1009 12:46:29.613956    4045 logs.go:123] Gathering logs for kube-scheduler [f7a145523693] ...
	I1009 12:46:29.613972    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a145523693"
	I1009 12:46:29.626984    4045 logs.go:123] Gathering logs for kube-proxy [fbc6dcf8f87b] ...
	I1009 12:46:29.626994    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbc6dcf8f87b"
	I1009 12:46:29.639559    4045 logs.go:123] Gathering logs for kube-apiserver [7ab74f2cae22] ...
	I1009 12:46:29.639571    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ab74f2cae22"
	I1009 12:46:29.666155    4045 logs.go:123] Gathering logs for etcd [fa75835cea07] ...
	I1009 12:46:29.666171    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa75835cea07"
	I1009 12:46:29.681884    4045 logs.go:123] Gathering logs for coredns [58edafb4ebe5] ...
	I1009 12:46:29.681900    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58edafb4ebe5"
	I1009 12:46:29.694950    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:46:29.694964    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:46:29.718779    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:46:29.718792    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:46:29.756968    4045 logs.go:123] Gathering logs for kube-scheduler [c89b00d98989] ...
	I1009 12:46:29.756982    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c89b00d98989"
	I1009 12:46:29.773379    4045 logs.go:123] Gathering logs for kube-controller-manager [083f64598caf] ...
	I1009 12:46:29.773395    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083f64598caf"
	I1009 12:46:29.796743    4045 logs.go:123] Gathering logs for kube-controller-manager [feeebbcd5fb9] ...
	I1009 12:46:29.796759    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feeebbcd5fb9"
	I1009 12:46:29.812245    4045 logs.go:123] Gathering logs for storage-provisioner [5dc06814468e] ...
	I1009 12:46:29.812258    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dc06814468e"
	I1009 12:46:29.830157    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:46:29.830169    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:46:29.842719    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:46:29.842732    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:46:32.348470    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:37.351059    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:37.351142    4045 kubeadm.go:597] duration metric: took 4m4.666133542s to restartPrimaryControlPlane
	W1009 12:46:37.351208    4045 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 12:46:37.351239    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1009 12:46:38.502871    4045 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.151652583s)
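After roughly four minutes of failed healthz probes (the 4m4.666s duration metric above), the restart path gives up and falls back to a forced kubeadm reset. A minimal sketch of that wait-then-fallback control flow, with the probe stubbed out and the budget assumed from the logged duration:

    // restart-then-reset fallback sketch; timing and behavior inferred
    // from the log, not taken from minikube's kubeadm.go.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func waitHealthy(budget time.Duration, probe func() error) error {
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return nil
            }
            time.Sleep(2500 * time.Millisecond) // pause between rounds
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        probe := func() error { return errors.New("context deadline exceeded") } // stand-in
        if err := waitHealthy(4*time.Minute, probe); err != nil {
            fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
            // the real fallback then runs:
            //   sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            //     kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
        }
    }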
	I1009 12:46:38.503399    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 12:46:38.509135    4045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 12:46:38.512199    4045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 12:46:38.515511    4045 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 12:46:38.515518    4045 kubeadm.go:157] found existing configuration files:
	
	I1009 12:46:38.515572    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf
	I1009 12:46:38.518723    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 12:46:38.518767    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 12:46:38.521711    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf
	I1009 12:46:38.524389    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 12:46:38.524430    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 12:46:38.527651    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.530886    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 12:46:38.530929    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 12:46:38.534524    4045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf
	I1009 12:46:38.537627    4045 kubeadm.go:163] "https://control-plane.minikube.internal:53678" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53678 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 12:46:38.537668    4045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
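The sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here simply because the files do not exist at all), clearing the way for kubeadm init to regenerate them. A rough local sketch of that logic (the real checks run over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs keeps a kubeconfig only if it already points at the
// expected endpoint; otherwise it is removed so kubeadm can rewrite it.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // mirrors `sudo rm -f <file>`; missing files are fine
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:53678", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}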
	I1009 12:46:38.540538    4045 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 12:46:38.558873    4045 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1009 12:46:38.558987    4045 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 12:46:38.615815    4045 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 12:46:38.615964    4045 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 12:46:38.616114    4045 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 12:46:38.671944    4045 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 12:46:38.676173    4045 out.go:235]   - Generating certificates and keys ...
	I1009 12:46:38.676213    4045 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 12:46:38.676263    4045 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 12:46:38.676327    4045 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 12:46:38.676370    4045 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 12:46:38.676512    4045 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 12:46:38.676542    4045 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 12:46:38.676579    4045 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 12:46:38.676614    4045 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 12:46:38.676653    4045 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 12:46:38.676704    4045 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 12:46:38.676727    4045 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 12:46:38.676760    4045 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 12:46:38.803980    4045 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 12:46:38.901546    4045 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 12:46:39.007628    4045 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 12:46:39.060519    4045 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 12:46:39.091960    4045 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 12:46:39.092360    4045 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 12:46:39.092440    4045 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 12:46:39.178147    4045 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 12:46:39.182176    4045 out.go:235]   - Booting up control plane ...
	I1009 12:46:39.182233    4045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 12:46:39.182389    4045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 12:46:39.182522    4045 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 12:46:39.183101    4045 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 12:46:39.184958    4045 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 12:46:44.192189    4045 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007811 seconds
	I1009 12:46:44.192390    4045 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 12:46:44.199773    4045 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 12:46:44.709113    4045 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 12:46:44.709228    4045 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-220000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 12:46:45.214212    4045 kubeadm.go:310] [bootstrap-token] Using token: peo2jx.ukob1vaa9j8bqbc9
	I1009 12:46:45.218111    4045 out.go:235]   - Configuring RBAC rules ...
	I1009 12:46:45.218173    4045 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 12:46:45.218222    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 12:46:45.225494    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 12:46:45.227105    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 12:46:45.228405    4045 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 12:46:45.229899    4045 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 12:46:45.235170    4045 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 12:46:45.439861    4045 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 12:46:45.621393    4045 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 12:46:45.621727    4045 kubeadm.go:310] 
	I1009 12:46:45.621829    4045 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 12:46:45.621846    4045 kubeadm.go:310] 
	I1009 12:46:45.621960    4045 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 12:46:45.621967    4045 kubeadm.go:310] 
	I1009 12:46:45.621980    4045 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 12:46:45.622022    4045 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 12:46:45.622075    4045 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 12:46:45.622082    4045 kubeadm.go:310] 
	I1009 12:46:45.622115    4045 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 12:46:45.622120    4045 kubeadm.go:310] 
	I1009 12:46:45.622145    4045 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 12:46:45.622150    4045 kubeadm.go:310] 
	I1009 12:46:45.622178    4045 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 12:46:45.622219    4045 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 12:46:45.622259    4045 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 12:46:45.622261    4045 kubeadm.go:310] 
	I1009 12:46:45.622366    4045 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 12:46:45.622440    4045 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 12:46:45.622445    4045 kubeadm.go:310] 
	I1009 12:46:45.622485    4045 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token peo2jx.ukob1vaa9j8bqbc9 \
	I1009 12:46:45.622545    4045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e \
	I1009 12:46:45.622560    4045 kubeadm.go:310] 	--control-plane 
	I1009 12:46:45.622588    4045 kubeadm.go:310] 
	I1009 12:46:45.622634    4045 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 12:46:45.622641    4045 kubeadm.go:310] 
	I1009 12:46:45.622683    4045 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token peo2jx.ukob1vaa9j8bqbc9 \
	I1009 12:46:45.622738    4045 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1c4e4ec44b781a68ca46f8bfd40a0a18a0c059aef746ffd0961086a4187b698e 
	I1009 12:46:45.622802    4045 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
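The --discovery-token-ca-cert-hash printed in the join commands above follows kubeadm's format: a SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info. A small sketch that reproduces such a hash from a CA certificate file (the path is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes SHA-256 over the DER-encoded SubjectPublicKeyInfo of
// the CA certificate, the value kubeadm prints as sha256:<hex>.
func caCertHash(path string) (string, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	// Path is an assumption; in this run the certs live under
	// /var/lib/minikube/certs inside the guest.
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}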
	I1009 12:46:45.622816    4045 cni.go:84] Creating CNI manager for ""
	I1009 12:46:45.623070    4045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:46:45.626323    4045 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 12:46:45.634235    4045 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 12:46:45.637475    4045 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
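The 496-byte 1-k8s.conflist itself is not shown in the log; a representative bridge CNI conflist of the kind being installed here might look like the following, with every field value illustrative rather than verbatim:

package main

import "os"

// Representative bridge conflist; the exact file minikube copies is not in
// the log, so treat the contents below as an assumption.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Mirrors `sudo mkdir -p /etc/cni/net.d` followed by the scp above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}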
	I1009 12:46:45.642877    4045 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 12:46:45.642988    4045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-220000 minikube.k8s.io/updated_at=2024_10_09T12_46_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=stopped-upgrade-220000 minikube.k8s.io/primary=true
	I1009 12:46:45.643060    4045 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 12:46:45.696864    4045 ops.go:34] apiserver oom_adj: -16
	I1009 12:46:45.696920    4045 kubeadm.go:1113] duration metric: took 53.972292ms to wait for elevateKubeSystemPrivileges
	I1009 12:46:45.696936    4045 kubeadm.go:394] duration metric: took 4m13.026165709s to StartCluster
	I1009 12:46:45.696949    4045 settings.go:142] acquiring lock: {Name:mk60ce4ac2055fafaa579c122d2ddfc9feae1fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.697036    4045 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:46:45.697440    4045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/kubeconfig: {Name:mk4c1705278acf5bca231aaf8d903f2912375394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:46:45.697640    4045 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:46:45.697711    4045 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 12:46:45.697755    4045 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-220000"
	I1009 12:46:45.697763    4045 config.go:182] Loaded profile config "stopped-upgrade-220000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1009 12:46:45.697764    4045 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-220000"
	I1009 12:46:45.697777    4045 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-220000"
	W1009 12:46:45.697782    4045 addons.go:243] addon storage-provisioner should already be in state true
	I1009 12:46:45.697815    4045 host.go:66] Checking if "stopped-upgrade-220000" exists ...
	I1009 12:46:45.697831    4045 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-220000"
	I1009 12:46:45.698258    4045 retry.go:31] will retry after 723.618154ms: connect: dial unix /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/monitor: connect: connection refused
	I1009 12:46:45.698989    4045 kapi.go:59] client config for stopped-upgrade-220000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key", CAFile:"/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1027600f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
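The rest.Config dump above is client-go's client configuration: the apiserver endpoint plus the profile's client certificate, key, and cluster CA. Reduced to its essentials, building a clientset from it looks roughly like this (endpoint and file paths are taken from the log line):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/stopped-upgrade-220000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19780-1164/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("building clientset failed:", err)
		return
	}
	fmt.Println("clientset ready:", clientset != nil)
}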
	I1009 12:46:45.699124    4045 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-220000"
	W1009 12:46:45.699129    4045 addons.go:243] addon default-storageclass should already be in state true
	I1009 12:46:45.699140    4045 host.go:66] Checking if "stopped-upgrade-220000" exists ...
	I1009 12:46:45.699748    4045 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:45.699754    4045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 12:46:45.699760    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:46:45.701256    4045 out.go:177] * Verifying Kubernetes components...
	I1009 12:46:45.708262    4045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 12:46:45.798192    4045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 12:46:45.804123    4045 api_server.go:52] waiting for apiserver process to appear ...
	I1009 12:46:45.804186    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 12:46:45.808717    4045 api_server.go:72] duration metric: took 111.068584ms to wait for apiserver process to appear ...
	I1009 12:46:45.808727    4045 api_server.go:88] waiting for apiserver healthz status ...
	I1009 12:46:45.808736    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:45.828711    4045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 12:46:46.151615    4045 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 12:46:46.151627    4045 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 12:46:46.425756    4045 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 12:46:46.429674    4045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:46.429681    4045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 12:46:46.429688    4045 sshutil.go:53] new ssh client: &{IP:localhost Port:53646 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/stopped-upgrade-220000/id_rsa Username:docker}
	I1009 12:46:46.461584    4045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 12:46:50.810650    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:50.810695    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:46:55.810765    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:46:55.810815    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:00.811138    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:00.811161    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:05.811910    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:05.811945    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:10.812573    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:10.812608    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:15.813357    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:15.813383    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1009 12:47:16.153075    4045 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1009 12:47:16.157078    4045 out.go:177] * Enabled addons: storage-provisioner
	I1009 12:47:16.165241    4045 addons.go:510] duration metric: took 30.468458625s for enable addons: enabled=[storage-provisioner]
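The default-storageclass failure above happens while listing StorageClasses against the unreachable apiserver. A minimal sketch of the equivalent client-go call (TLS files omitted for brevity; see the clientset sketch above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "https://10.0.2.15:8443"}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	// GET /apis/storage.k8s.io/v1/storageclasses, the request that timed out.
	scs, err := clientset.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	fmt.Println("storage classes:", len(scs.Items))
}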
	I1009 12:47:20.814313    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:20.814333    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:25.815591    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:25.815627    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:30.817258    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:30.817306    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:35.819351    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:35.819375    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:40.821438    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:40.821472    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:45.821683    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:45.821772    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:45.834442    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:47:45.834530    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:45.846186    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:47:45.846271    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:45.857405    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:47:45.857491    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:45.868135    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:47:45.868214    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:45.880755    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:47:45.880838    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:45.892786    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:47:45.892865    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:45.904347    4045 logs.go:282] 0 containers: []
	W1009 12:47:45.904358    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:45.904429    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:45.915731    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:47:45.915747    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:47:45.915753    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:47:45.938098    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:47:45.938115    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:47:45.951712    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:47:45.951724    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:47:45.967702    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:47:45.967719    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:47:45.986024    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:45.986037    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:46.023069    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:47:46.023080    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:47:46.038371    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:47:46.038383    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:47:46.054551    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:47:46.054561    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:47:46.069320    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:47:46.069330    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:47:46.080589    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:46.080597    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:46.104922    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:47:46.104931    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:46.116653    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:46.116663    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:46.152306    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:46.152315    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
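From here the run settles into a cycle: probe /healthz, time out, then re-gather logs from every control-plane container. A condensed sketch of that gathering step, looking containers up by the k8s_ name filter and tailing 400 lines from each (a hypothetical standalone version of the loop the log shows):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, name := range components {
		// Same name filter the log shows: pod containers are prefixed "k8s_".
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
		}
	}
}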
	I1009 12:47:48.658512    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:47:53.660921    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:47:53.661022    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:47:53.672428    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:47:53.672516    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:47:53.683981    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:47:53.684058    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:47:53.694862    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:47:53.694941    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:47:53.708292    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:47:53.708377    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:47:53.719522    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:47:53.719601    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:47:53.730935    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:47:53.731024    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:47:53.742104    4045 logs.go:282] 0 containers: []
	W1009 12:47:53.742119    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:47:53.742190    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:47:53.754221    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:47:53.754237    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:47:53.754243    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:47:53.758978    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:47:53.758986    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:47:53.774286    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:47:53.774297    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:47:53.786104    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:47:53.786117    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:47:53.798392    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:47:53.798404    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:47:53.817838    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:47:53.817849    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:47:53.830745    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:47:53.830757    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:47:53.864781    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:47:53.864790    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:47:53.898850    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:47:53.898861    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:47:53.914862    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:47:53.914872    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:47:53.927073    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:47:53.927084    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:47:53.943036    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:47:53.943050    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:47:53.954714    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:47:53.954723    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:47:56.480562    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:01.481598    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:01.481687    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:01.493306    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:01.493387    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:01.504435    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:01.504515    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:01.516178    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:01.516261    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:01.529344    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:01.529423    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:01.540604    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:01.540690    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:01.556654    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:01.556739    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:01.567074    4045 logs.go:282] 0 containers: []
	W1009 12:48:01.567084    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:01.567152    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:01.578528    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:01.578542    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:01.578547    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:01.590333    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:01.590345    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:01.609384    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:01.609393    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:01.636064    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:01.636081    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:01.648356    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:01.648370    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:01.686380    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:01.686400    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:01.691401    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:01.691415    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:01.705844    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:01.705855    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:01.719784    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:01.719798    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:01.732985    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:01.732999    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:01.766935    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:01.766947    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:01.778625    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:01.778639    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:01.794081    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:01.794092    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:04.308556    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:09.308734    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:09.308841    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:09.319953    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:09.320034    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:09.330908    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:09.330994    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:09.342084    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:09.342176    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:09.354277    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:09.354362    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:09.365428    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:09.365512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:09.387403    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:09.387484    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:09.398244    4045 logs.go:282] 0 containers: []
	W1009 12:48:09.398255    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:09.398322    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:09.410329    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:09.410346    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:09.410352    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:09.436636    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:09.436657    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:09.458411    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:09.458432    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:09.478049    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:09.478065    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:09.497207    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:09.497216    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:09.509828    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:09.509838    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:09.522427    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:09.522437    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:09.538129    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:09.538141    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:09.551340    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:09.551352    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:09.564350    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:09.564359    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:09.601196    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:09.601207    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:09.606505    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:09.606512    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:09.640861    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:09.640873    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:12.158510    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:17.160558    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:17.160640    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:17.172486    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:17.172572    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:17.188121    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:17.188206    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:17.200061    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:17.200140    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:17.211250    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:17.211325    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:17.222453    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:17.222536    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:17.234166    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:17.234246    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:17.248241    4045 logs.go:282] 0 containers: []
	W1009 12:48:17.248254    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:17.248323    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:17.264429    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:17.264445    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:17.264450    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:17.284654    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:17.284666    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:17.297753    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:17.297765    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:17.316388    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:17.316401    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:17.354654    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:17.354671    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:17.359406    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:17.359416    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:17.396574    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:17.396591    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:17.411475    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:17.411488    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:17.424135    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:17.424146    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:17.436375    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:17.436385    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:17.462945    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:17.462956    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:17.476080    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:17.476092    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:17.491391    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:17.491406    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:20.008802    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:25.010850    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:25.010951    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:25.022244    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:25.022324    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:25.033765    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:25.033911    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:25.047001    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:25.047075    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:25.058679    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:25.058755    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:25.069879    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:25.069958    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:25.082478    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:25.082554    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:25.095803    4045 logs.go:282] 0 containers: []
	W1009 12:48:25.095814    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:25.095881    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:25.110889    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:25.110905    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:25.110910    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:25.136775    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:25.136790    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:25.150402    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:25.150415    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:25.188635    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:25.188646    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:25.227196    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:25.227213    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:25.239668    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:25.239681    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:25.256639    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:25.256653    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:25.268346    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:25.268356    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:25.286184    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:25.286195    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:25.291008    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:25.291018    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:25.306626    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:25.306640    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:25.321721    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:25.321729    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:25.335578    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:25.335588    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:27.849609    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:32.851685    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:32.851997    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:32.875240    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:32.875353    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:32.892005    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:32.892104    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:32.905957    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:32.906043    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:32.918155    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:32.918236    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:32.929492    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:32.929583    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:32.940692    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:32.940775    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:32.952017    4045 logs.go:282] 0 containers: []
	W1009 12:48:32.952029    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:32.952101    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:32.963488    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:32.963504    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:32.963510    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:32.968421    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:32.968431    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:33.007553    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:33.007572    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:33.023609    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:33.023624    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:33.056468    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:33.056479    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:33.070090    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:33.070101    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:33.083402    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:33.083417    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:33.099752    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:33.099768    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:33.137959    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:33.137969    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:33.156648    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:33.156658    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:33.176032    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:33.176047    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:33.201811    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:33.201828    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:33.214264    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:33.214277    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:35.728042    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:40.730030    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:40.730157    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:40.755592    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:40.755675    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:40.766827    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:40.766903    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:40.780382    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:40.780459    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:40.792352    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:40.792427    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:40.803917    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:40.803995    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:40.815229    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:40.815306    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:40.826926    4045 logs.go:282] 0 containers: []
	W1009 12:48:40.826938    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:40.827007    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:40.838124    4045 logs.go:282] 1 containers: [ffcd4983c17e]
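
After each failed poll, the runner enumerates one container ID per control-plane component with the "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" commands shown above (the "k8s_" prefix is the dockershim container-naming convention). A compact sketch of the same enumeration, assuming only the Docker CLI is available:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            // Same filter and format the log runs for each component.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("listing %s failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // Mirrors the warning in the log for "kindnet".
                fmt.Printf("W: no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
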
	I1009 12:48:40.838141    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:40.838147    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:40.851377    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:40.851388    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:40.867397    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:40.867408    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:40.879559    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:40.879575    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:40.894490    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:40.894507    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:40.910852    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:40.910866    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:40.949957    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:40.949966    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:40.963238    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:40.963250    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:40.976316    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:40.976328    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:40.995685    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:40.995702    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:41.020720    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:41.020734    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:41.033253    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:41.033264    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:41.069253    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:41.069265    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
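
Each gathering pass then tails the last 400 lines per source: "docker logs --tail 400 <id>" for containers, "journalctl -u <unit> -n 400" for host services. A sketch of those two steps (illustrative only; the container ID is one taken from the polls above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer prints the last 400 log lines of one container,
    // as in the "docker logs --tail 400 <id>" commands above.
    func tailContainer(id string) {
        out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        fmt.Print(string(out))
    }

    // tailUnits prints the last 400 journal lines for one or more systemd
    // units, as in the journalctl commands above.
    func tailUnits(units ...string) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        out, _ := exec.Command("sudo", args...).CombinedOutput()
        fmt.Print(string(out))
    }

    func main() {
        tailContainer("f6fbaf1c33c9") // kube-apiserver ID from the polls above
        tailUnits("kubelet")
        tailUnits("docker", "cri-docker")
    }
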
	I1009 12:48:43.575867    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:48.577875    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:48.578032    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:48.593835    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:48.593887    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:48.606547    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:48.606591    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:48.618138    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:48.618185    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:48.630484    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:48.630564    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:48.641954    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:48.642036    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:48.654219    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:48.654296    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:48.666236    4045 logs.go:282] 0 containers: []
	W1009 12:48:48.666250    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:48.666319    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:48.678121    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:48.678136    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:48.678141    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:48.692726    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:48.692743    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:48.706167    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:48.706179    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:48.729569    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:48.729580    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:48.742113    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:48.742125    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:48.758428    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:48.758440    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:48.772504    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:48.772516    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:48.808286    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:48.808302    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:48.813289    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:48.813301    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:48.852732    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:48.852741    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:48.869038    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:48.869050    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:48.887922    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:48.887931    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:48.915907    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:48.915919    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
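
The "container status" step uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a ends up running "docker ps -a" whenever crictl is absent or fails. A sketch mirroring that intent (not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it is on PATH; otherwise, or on failure,
        // fall back to the Docker CLI.
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("container status unavailable:", err)
            return
        }
        fmt.Print(string(out))
    }
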
	I1009 12:48:51.429310    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:48:56.431479    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:48:56.431821    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:48:56.463927    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:48:56.464055    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:48:56.482505    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:48:56.482604    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:48:56.498361    4045 logs.go:282] 2 containers: [121389e68477 90c3b04e0c6e]
	I1009 12:48:56.498451    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:48:56.515441    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:48:56.515508    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:48:56.527359    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:48:56.527448    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:48:56.540763    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:48:56.540848    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:48:56.552758    4045 logs.go:282] 0 containers: []
	W1009 12:48:56.552765    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:48:56.552794    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:48:56.565270    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:48:56.565285    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:48:56.565291    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:48:56.604294    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:48:56.604319    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:48:56.632068    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:48:56.632084    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:48:56.673119    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:48:56.673136    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:48:56.703236    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:48:56.703250    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:48:56.733695    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:48:56.733707    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:48:56.768644    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:48:56.768665    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:48:56.775676    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:48:56.775688    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:48:56.862675    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:48:56.862688    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:48:56.878819    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:48:56.878830    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:48:56.891529    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:48:56.891543    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:48:56.908379    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:48:56.908394    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:48:56.928034    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:48:56.928047    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:48:59.442525    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:04.444564    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:04.444758    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:04.471231    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:04.471335    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:04.489151    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:04.489249    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:04.503722    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
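
Note: from this poll onward coredns reports four containers instead of two; the new IDs (42ded8f55b11, 81471a897a78) suggest the coredns pods were recreated during the outage, with the exited containers still visible to "docker ps -a".
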
	I1009 12:49:04.503815    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:04.521173    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:04.521256    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:04.533468    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:04.533512    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:04.545283    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:04.545341    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:04.557044    4045 logs.go:282] 0 containers: []
	W1009 12:49:04.557055    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:04.557123    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:04.571142    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:04.571163    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:04.571168    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:04.587599    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:04.587611    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:04.604874    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:04.604885    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:04.624135    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:04.624153    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:04.651737    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:04.651751    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:04.675006    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:04.675018    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:04.680241    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:04.680251    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:04.693516    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:04.693529    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:04.710060    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:04.710074    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:04.722878    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:04.722891    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:04.735448    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:04.735461    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:04.775758    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:04.775771    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:04.791929    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:04.791946    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:04.805379    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:04.805391    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:04.841979    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:04.841988    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:07.356182    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:12.358248    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:12.358340    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:12.370867    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:12.370947    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:12.384449    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:12.384533    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:12.397437    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:12.397520    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:12.409753    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:12.409834    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:12.422267    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:12.422348    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:12.433802    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:12.433883    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:12.445500    4045 logs.go:282] 0 containers: []
	W1009 12:49:12.445511    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:12.445583    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:12.458015    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:12.458034    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:12.458039    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:12.475887    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:12.475903    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:12.489684    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:12.489700    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:12.502619    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:12.502631    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:12.520330    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:12.520342    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:12.539083    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:12.539093    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:12.552690    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:12.552698    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:12.566518    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:12.566528    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:12.582208    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:12.582221    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:12.610789    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:12.610804    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:12.651592    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:12.651606    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:12.667713    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:12.667725    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:12.688130    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:12.688140    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:12.701264    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:12.701273    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:12.738281    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:12.738293    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
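
The dmesg step in each cycle filters kernel messages to warning severity and above. Per util-linux dmesg, -H is human-readable output, -L=never disables color, -P disables the pager, and --level restricts the severities shown; piping through tail keeps the last 400 lines. A stand-alone equivalent, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exact pipeline from the log, run through bash because of the pipe.
        const cmd = `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("dmesg gather failed:", err)
            return
        }
        fmt.Print(string(out))
    }
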
	I1009 12:49:15.244814    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:20.246268    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:20.246433    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:20.266174    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:20.266266    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:20.281759    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:20.281811    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:20.295150    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:20.295221    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:20.310871    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:20.310951    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:20.323723    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:20.323806    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:20.336768    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:20.336851    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:20.350023    4045 logs.go:282] 0 containers: []
	W1009 12:49:20.350036    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:20.350107    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:20.363039    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:20.363057    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:20.363062    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:20.376413    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:20.376423    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:20.413806    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:20.413817    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:20.427470    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:20.427482    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:20.440868    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:20.440880    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:20.479155    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:20.479170    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:20.498501    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:20.498512    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:20.511531    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:20.511543    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:20.525135    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:20.525151    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:20.539836    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:20.539849    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:20.553651    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:20.553664    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:20.558228    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:20.558244    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:20.574562    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:20.574569    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:20.591822    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:20.591831    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:20.610976    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:20.610988    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:23.138829    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:28.140954    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:28.141146    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:28.164755    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:28.164850    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:28.179592    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:28.179680    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:28.192281    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:28.192371    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:28.205331    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:28.205378    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:28.217513    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:28.217558    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:28.229605    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:28.229671    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:28.244761    4045 logs.go:282] 0 containers: []
	W1009 12:49:28.244773    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:28.244845    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:28.256502    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:28.256520    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:28.256525    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:28.282377    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:28.282394    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:28.296564    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:28.296575    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:28.301091    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:28.301100    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:28.317201    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:28.317212    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:28.336704    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:28.336717    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:28.365831    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:28.365843    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:28.379132    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:28.379155    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:28.392497    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:28.392510    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:28.407256    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:28.407266    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:28.443788    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:28.443812    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:28.483265    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:28.483277    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:28.501931    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:28.501941    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:28.524283    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:28.524294    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:28.548806    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:28.548817    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:31.062386    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:36.064473    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:36.064585    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:36.081439    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:36.081518    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:36.093119    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:36.093205    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:36.104728    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:36.104772    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:36.115798    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:36.115870    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:36.127492    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:36.127574    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:36.138750    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:36.138835    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:36.149757    4045 logs.go:282] 0 containers: []
	W1009 12:49:36.149768    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:36.149838    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:36.161173    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:36.161191    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:36.161196    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:36.176572    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:36.176585    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:36.189670    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:36.189682    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:36.205560    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:36.205573    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:36.218287    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:36.218299    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:36.232781    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:36.232792    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:36.270604    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:36.270612    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:36.288773    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:36.288784    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:36.311362    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:36.311374    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:36.324374    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:36.324387    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:36.351036    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:36.351046    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:36.366349    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:36.366360    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:36.379050    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:36.379063    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:36.384052    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:36.384060    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:36.396424    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:36.396441    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:38.932053    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:43.928913    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:43.929001    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:43.940727    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:43.940807    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:43.951864    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:43.951940    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:43.964971    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:43.965050    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:43.976970    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:43.977049    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:43.988059    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:43.988135    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:43.999369    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:43.999474    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:44.011312    4045 logs.go:282] 0 containers: []
	W1009 12:49:44.011322    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:44.011391    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:44.022799    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:44.022821    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:44.022827    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:44.036583    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:44.036594    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:44.054870    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:44.054881    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:44.095801    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:44.095813    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:44.111378    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:44.111386    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:44.124025    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:44.124041    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:44.136429    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:44.136445    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:44.149865    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:44.149877    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:44.186493    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:44.186511    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:44.207327    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:44.207387    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:44.221486    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:44.221502    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:44.242416    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:44.242429    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:44.268294    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:44.268305    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:44.272406    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:44.272412    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:44.284460    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:44.284474    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:46.796594    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:51.795577    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:51.795654    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:51.807207    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:51.807285    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:51.823996    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:51.824039    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:51.835869    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:51.835945    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:51.847378    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:51.847458    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:51.859101    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:51.859181    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:51.870165    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:51.870242    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:51.881046    4045 logs.go:282] 0 containers: []
	W1009 12:49:51.881058    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:51.881129    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:51.892841    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:51.892860    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:51.892865    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:51.905120    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:51.905131    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:51.917874    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:51.917886    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:51.954130    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:51.954142    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:51.959096    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:51.959107    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:51.974966    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:51.974978    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:51.993805    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:49:51.993813    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:49:52.009057    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:52.009068    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:52.024146    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:52.024158    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:52.036659    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:52.036670    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:52.052988    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:52.052998    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:52.080122    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:52.080134    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:52.093097    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:52.093109    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:52.105939    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:52.105951    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:49:52.141872    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:52.141884    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:54.655325    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:49:59.655708    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:49:59.655795    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:49:59.671776    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:49:59.671867    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:49:59.687769    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:49:59.687839    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:49:59.699146    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:49:59.699225    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:49:59.710966    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:49:59.711041    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:49:59.722474    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:49:59.722571    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:49:59.735713    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:49:59.735794    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:49:59.746560    4045 logs.go:282] 0 containers: []
	W1009 12:49:59.746572    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:49:59.746643    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:49:59.763235    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:49:59.763256    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:49:59.763261    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:49:59.776644    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:49:59.776655    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:49:59.800108    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:49:59.800121    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:49:59.826911    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:49:59.826928    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:49:59.839397    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:49:59.839412    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:49:59.843957    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:49:59.843968    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:49:59.859680    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:49:59.859695    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:49:59.875230    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:49:59.875241    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:49:59.891709    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:49:59.891717    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:49:59.904427    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:49:59.904435    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:49:59.917573    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:49:59.917584    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:49:59.943798    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:49:59.943811    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:49:59.956613    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:49:59.956624    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:49:59.993440    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:49:59.993457    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:00.030163    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:00.030176    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:02.545760    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:07.546842    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:07.546986    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:07.561992    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:07.562085    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:07.574864    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:07.574946    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:07.586465    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:07.586558    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:07.597693    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:07.597774    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:07.609369    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:07.609451    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:07.621154    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:07.621233    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:07.632679    4045 logs.go:282] 0 containers: []
	W1009 12:50:07.632692    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:07.632762    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:07.644514    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:07.644533    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:07.644538    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:07.662717    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:07.662734    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:07.702880    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:07.702894    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:07.719158    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:07.719173    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:07.744406    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:07.744415    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:07.757565    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:07.757577    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:07.769832    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:07.769844    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:07.783192    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:07.783204    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:07.797073    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:07.797084    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:07.809839    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:07.809851    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:07.814639    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:07.814646    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:07.852482    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:07.852497    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:07.868927    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:07.868940    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:07.884235    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:07.884246    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:07.901882    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:07.901893    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:10.416929    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:15.418557    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:15.418755    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:15.443741    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:15.443850    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:15.461338    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:15.461435    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:15.476512    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:15.476601    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:15.490210    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:15.490285    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:15.504040    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:15.504121    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:15.515587    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:15.515667    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:15.531124    4045 logs.go:282] 0 containers: []
	W1009 12:50:15.531137    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:15.531208    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:15.542688    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:15.542710    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:15.542716    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:15.555917    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:15.555930    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:15.581735    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:15.581750    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:15.605312    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:15.605321    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:15.621819    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:15.621839    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:15.645244    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:15.645255    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:15.683228    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:15.683241    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:15.730080    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:15.730094    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:15.747985    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:15.747999    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:15.760348    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:15.760361    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:15.764960    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:15.764971    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:15.780414    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:15.780425    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:15.793165    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:15.793178    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:15.808261    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:15.808272    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:15.820348    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:15.820359    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:18.333947    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:23.335611    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:23.335731    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:23.350065    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:23.350149    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:23.361665    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:23.361740    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:23.377492    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:23.377552    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:23.388752    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:23.388815    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:23.400141    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:23.400213    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:23.411694    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:23.411758    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:23.422359    4045 logs.go:282] 0 containers: []
	W1009 12:50:23.422369    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:23.422435    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:23.435171    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:23.435188    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:23.435194    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:23.447919    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:23.447931    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:23.467627    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:23.467638    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:23.480227    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:23.480243    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:23.495028    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:23.495040    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:23.511729    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:23.511746    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:23.524589    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:23.524602    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:23.543056    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:23.543068    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:23.581492    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:23.581505    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:23.594376    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:23.594389    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:23.607120    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:23.607127    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:23.631149    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:23.631167    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:23.658522    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:23.658535    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:23.674414    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:23.674425    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:23.711682    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:23.711693    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:26.218199    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:31.218863    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:31.218922    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:31.230994    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:31.231039    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:31.246888    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:31.246974    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:31.258595    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:31.258675    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:31.269871    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:31.269949    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:31.281049    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:31.281103    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:31.293163    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:31.293239    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:31.304513    4045 logs.go:282] 0 containers: []
	W1009 12:50:31.304524    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:31.304593    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:31.320270    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:31.320288    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:31.320292    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:31.356772    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:31.356780    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:31.369270    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:31.369279    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:31.387907    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:31.387922    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:31.414347    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:31.414365    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:31.452800    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:31.452819    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:31.465261    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:31.465275    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:31.481697    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:31.481716    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:31.494629    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:31.494640    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:31.510749    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:31.510766    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:31.523787    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:31.523800    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:31.528866    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:31.528876    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:31.543987    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:31.544000    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:31.557173    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:31.557187    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:31.571185    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:31.571199    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:34.085057    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:39.086955    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:39.087158    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 12:50:39.110623    4045 logs.go:282] 1 containers: [f6fbaf1c33c9]
	I1009 12:50:39.110685    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 12:50:39.124113    4045 logs.go:282] 1 containers: [87bb4f51e3cc]
	I1009 12:50:39.124166    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 12:50:39.136360    4045 logs.go:282] 4 containers: [42ded8f55b11 81471a897a78 121389e68477 90c3b04e0c6e]
	I1009 12:50:39.136415    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 12:50:39.148153    4045 logs.go:282] 1 containers: [d5eda9c56e13]
	I1009 12:50:39.148203    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 12:50:39.159452    4045 logs.go:282] 1 containers: [8a954059794e]
	I1009 12:50:39.159497    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 12:50:39.175363    4045 logs.go:282] 1 containers: [396cfeca9331]
	I1009 12:50:39.175433    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 12:50:39.187123    4045 logs.go:282] 0 containers: []
	W1009 12:50:39.187134    4045 logs.go:284] No container was found matching "kindnet"
	I1009 12:50:39.187201    4045 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 12:50:39.198919    4045 logs.go:282] 1 containers: [ffcd4983c17e]
	I1009 12:50:39.198938    4045 logs.go:123] Gathering logs for etcd [87bb4f51e3cc] ...
	I1009 12:50:39.198944    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bb4f51e3cc"
	I1009 12:50:39.213492    4045 logs.go:123] Gathering logs for coredns [42ded8f55b11] ...
	I1009 12:50:39.213504    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ded8f55b11"
	I1009 12:50:39.226757    4045 logs.go:123] Gathering logs for kube-scheduler [d5eda9c56e13] ...
	I1009 12:50:39.226770    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5eda9c56e13"
	I1009 12:50:39.245573    4045 logs.go:123] Gathering logs for kube-proxy [8a954059794e] ...
	I1009 12:50:39.245590    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a954059794e"
	I1009 12:50:39.258429    4045 logs.go:123] Gathering logs for describe nodes ...
	I1009 12:50:39.258443    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 12:50:39.298955    4045 logs.go:123] Gathering logs for kube-apiserver [f6fbaf1c33c9] ...
	I1009 12:50:39.298968    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6fbaf1c33c9"
	I1009 12:50:39.314452    4045 logs.go:123] Gathering logs for coredns [90c3b04e0c6e] ...
	I1009 12:50:39.314464    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c3b04e0c6e"
	I1009 12:50:39.326866    4045 logs.go:123] Gathering logs for dmesg ...
	I1009 12:50:39.326878    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 12:50:39.331712    4045 logs.go:123] Gathering logs for kube-controller-manager [396cfeca9331] ...
	I1009 12:50:39.331723    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396cfeca9331"
	I1009 12:50:39.350387    4045 logs.go:123] Gathering logs for storage-provisioner [ffcd4983c17e] ...
	I1009 12:50:39.350399    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffcd4983c17e"
	I1009 12:50:39.363438    4045 logs.go:123] Gathering logs for container status ...
	I1009 12:50:39.363449    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 12:50:39.376124    4045 logs.go:123] Gathering logs for kubelet ...
	I1009 12:50:39.376136    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 12:50:39.414038    4045 logs.go:123] Gathering logs for coredns [81471a897a78] ...
	I1009 12:50:39.414063    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81471a897a78"
	I1009 12:50:39.428054    4045 logs.go:123] Gathering logs for coredns [121389e68477] ...
	I1009 12:50:39.428062    4045 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 121389e68477"
	I1009 12:50:39.440846    4045 logs.go:123] Gathering logs for Docker ...
	I1009 12:50:39.440856    4045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 12:50:41.966948    4045 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1009 12:50:46.968846    4045 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1009 12:50:46.988503    4045 out.go:201] 
	W1009 12:50:46.997504    4045 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1009 12:50:46.997519    4045 out.go:270] * 
	W1009 12:50:46.998209    4045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:50:47.012515    4045 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-220000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (600.80s)
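The log above repeats a single pattern until the 6-minute node-wait budget expires: poll the apiserver's /healthz endpoint (each attempt times out after ~5s with "Client.Timeout exceeded while awaiting headers"), and on failure enumerate the k8s containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and dump each one's last 400 log lines. The following is a minimal Go sketch of that diagnostic loop, for illustration only; it is not minikube's actual code, and the endpoint, timeouts, and commands are taken from the log above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// healthz performs one GET against the apiserver, mirroring the
// api_server.go:253 probes above; the 5s client timeout matches the
// five-second gap before each "stopped:" line in the log.
func healthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver's self-signed cert is not trusted by this probe
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

// containerIDs mirrors `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`.
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
	for time.Now().Before(deadline) {
		if err := healthz("https://10.0.2.15:8443/healthz"); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		// on failure, gather the last 400 log lines per component,
		// as the `docker logs --tail 400 <id>` commands above do
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, c := range components {
			for _, id := range containerIDs(c) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s]: %d bytes of logs ===\n", c, id, len(logs))
			}
		}
		time.Sleep(3 * time.Second) // roughly the inter-attempt gap seen above
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}

Every probe in this run ends in "context deadline exceeded", so the loop exhausts its budget and the start exits with GUEST_START, which is what the --- FAIL line records.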

                                                
                                    
TestPause/serial/Start (9.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-527000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-527000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.851399167s)

                                                
                                                
-- stdout --
	* [pause-527000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-527000" primary control-plane node in "pause-527000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-527000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-527000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-527000 -n pause-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-527000 -n pause-527000: exit status 7 (76.517916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.93s)
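Every qemu2 test in this run fails the same way: the socket_vmnet daemon behind /var/run/socket_vmnet refuses connections, so the VM never gets a network. A small Go probe like the one below (hypothetical, not part of the suite; it may need the same privileges the socket's permissions require) would separate this environment failure from a genuine minikube regression before each start.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// the socket path matches the ERROR lines above
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// e.g. "connection refused", as every qemu2 start in this report shows
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

With the daemon down, the delete-and-recreate retry visible in the stdout block above can never succeed, which is why the second "Creating qemu2 VM" attempt fails identically.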

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (11.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-206000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-206000 --driver=qemu2 : exit status 80 (11.570096083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-206000" primary control-plane node in "NoKubernetes-206000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-206000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-206000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-206000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000: exit status 7 (60.374458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (11.63s)
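The post-mortem block above relies on `minikube status --format={{.Host}}` and tolerates exit status 7 when the printed state is "Stopped". A hedged sketch of that check follows; the binary path, profile name, and the exit-7 interpretation are taken from this log, not from the harness source.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "NoKubernetes-206000")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// matches helpers_test.go:239/241 above: "may be ok", skip log retrieval
		fmt.Printf("host is %q (exit 7, may be ok), skipping log retrieval\n", state)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", state)
}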

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --driver=qemu2 : exit status 80 (7.475915583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-206000
	* Restarting existing qemu2 VM for "NoKubernetes-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-206000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000: exit status 7 (61.484167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.54s)

                                                
                                    
TestNoKubernetes/serial/Start (7.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --driver=qemu2 : exit status 80 (7.65155625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-206000
	* Restarting existing qemu2 VM for "NoKubernetes-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-206000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000: exit status 7 (38.667958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.69s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.81s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19780
- KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2931687221/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.81s)
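DRV_UNSUPPORTED_OS is expected here: hyperkit is an Intel-Mac hypervisor and this agent is darwin/arm64, so both TestHyperkitDriverSkipUpgrade variants (this one and upgrade-v1.2.0-to-current below) fail identically. A guard along these lines would skip rather than fail on unsupported platforms; this is an assumed precondition sketch, not the test's actual code.

package main

import (
	"fmt"
	"runtime"
)

// hyperkitSupported reflects the constraint in the exit message above:
// the driver only runs on Intel macOS hosts.
func hyperkitSupported() bool {
	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
}

func main() {
	if !hyperkitSupported() {
		fmt.Printf("skip: hyperkit not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
		return
	}
	fmt.Println("hyperkit tests may proceed")
}

Inside a Go test the equivalent would be a `t.Skipf(...)` at the top of the test function, so the run reports SKIP instead of FAIL on arm64 agents.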

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.25s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19780
- KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1183187998/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-206000 --driver=qemu2 
I1009 12:51:51.178486    1686 install.go:79] stdout: 
W1009 12:51:51.178715    1686 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I1009 12:51:51.178743    1686 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit]
I1009 12:51:51.195994    1686 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit]
I1009 12:51:51.208666    1686 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit]
I1009 12:51:51.219957    1686 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit]
I1009 12:51:51.241003    1686 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 12:51:51.241093    1686 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1009 12:51:53.047456    1686 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1009 12:51:53.047480    1686 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1009 12:51:53.047535    1686 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1009 12:51:53.047570    1686 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit
I1009 12:51:53.438593    1686 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0] Decompressors:map[bz2:0x1400091ce20 gz:0x1400091ce28 tar:0x1400091cdd0 tar.bz2:0x1400091cde0 tar.gz:0x1400091cdf0 tar.xz:0x1400091ce00 tar.zst:0x1400091ce10 tbz2:0x1400091cde0 tgz:0x1400091cdf0 txz:0x1400091ce00 tzst:0x1400091ce10 xz:0x1400091ce30 zip:0x1400091ce40 zst:0x1400091ce38] Getters:map[file:0x14000687250 http:0x140009121e0 https:0x14000912230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1009 12:51:53.438717    1686 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/002/docker-machine-driver-hyperkit
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-206000 --driver=qemu2 : exit status 80 (5.29454425s)

                                                
                                                
-- stdout --
	* [NoKubernetes-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-206000
	* Restarting existing qemu2 VM for "NoKubernetes-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-206000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-206000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-206000 -n NoKubernetes-206000: exit status 7 (71.531792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.37s)
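The install.go lines interleaved into this test (pid 1686, from a concurrently running hyperkit driver-update test) show the driver updater's fallback: the arch-specific v1.3.0 asset docker-machine-driver-hyperkit-arm64 does not exist, its checksum download returns 404, and the updater retries the unsuffixed common asset. A minimal sketch of that try-then-fall-back logic follows, using the URLs from the log; it is illustrative only and not the updater's actual implementation.

package main

import (
	"fmt"
	"net/http"
)

// exists reports whether a release asset URL answers a HEAD request with 200.
func exists(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	// try the arch-suffixed asset first, then fall back to the common one,
	// as driver.go:46 above does after the 404 on the -arm64 checksum
	for _, url := range []string{base + "-arm64", base} {
		if exists(url+".sha256") && exists(url) {
			fmt.Println("would download:", url)
			return
		}
		fmt.Println("asset or checksum missing, trying next:", url)
	}
	fmt.Println("no downloadable driver found")
}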

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.865352458s)

                                                
                                                
-- stdout --
	* [auto-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-311000" primary control-plane node in "auto-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:52:27.948268    4436 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:52:27.948428    4436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:27.948431    4436 out.go:358] Setting ErrFile to fd 2...
	I1009 12:52:27.948434    4436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:27.948554    4436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:52:27.949657    4436 out.go:352] Setting JSON to false
	I1009 12:52:27.967100    4436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4917,"bootTime":1728498630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:52:27.967179    4436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:52:27.972323    4436 out.go:177] * [auto-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:52:27.980379    4436 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:52:27.980435    4436 notify.go:220] Checking for updates...
	I1009 12:52:27.986246    4436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:52:27.989312    4436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:52:27.992197    4436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:52:27.995273    4436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:52:27.998283    4436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:52:28.001589    4436 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:28.001669    4436 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:28.001719    4436 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:52:28.006228    4436 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:52:28.012237    4436 start.go:297] selected driver: qemu2
	I1009 12:52:28.012245    4436 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:52:28.012258    4436 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:52:28.014746    4436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:52:28.018194    4436 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:52:28.021374    4436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:52:28.021410    4436 cni.go:84] Creating CNI manager for ""
	I1009 12:52:28.021432    4436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:52:28.021439    4436 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:52:28.021473    4436 start.go:340] cluster config:
	{Name:auto-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:52:28.026063    4436 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:52:28.034233    4436 out.go:177] * Starting "auto-311000" primary control-plane node in "auto-311000" cluster
	I1009 12:52:28.038274    4436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:52:28.038295    4436 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:52:28.038306    4436 cache.go:56] Caching tarball of preloaded images
	I1009 12:52:28.038393    4436 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:52:28.038399    4436 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:52:28.038464    4436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/auto-311000/config.json ...
	I1009 12:52:28.038475    4436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/auto-311000/config.json: {Name:mk2759e1bb006ad73674eae2c47488e50cb19ddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:52:28.038871    4436 start.go:360] acquireMachinesLock for auto-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:28.038918    4436 start.go:364] duration metric: took 41.833µs to acquireMachinesLock for "auto-311000"
	I1009 12:52:28.038928    4436 start.go:93] Provisioning new machine with config: &{Name:auto-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:28.038953    4436 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:28.047302    4436 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:52:28.064606    4436 start.go:159] libmachine.API.Create for "auto-311000" (driver="qemu2")
	I1009 12:52:28.064634    4436 client.go:168] LocalClient.Create starting
	I1009 12:52:28.064699    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:28.064738    4436 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:28.064746    4436 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:28.064782    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:28.064810    4436 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:28.064820    4436 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:28.065173    4436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:28.224927    4436 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:28.351275    4436 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:28.351281    4436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:28.351477    4436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2
	I1009 12:52:28.361406    4436 main.go:141] libmachine: STDOUT: 
	I1009 12:52:28.361422    4436 main.go:141] libmachine: STDERR: 
	I1009 12:52:28.361478    4436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2 +20000M
	I1009 12:52:28.369846    4436 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:28.369861    4436 main.go:141] libmachine: STDERR: 
	I1009 12:52:28.369877    4436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2
	I1009 12:52:28.369884    4436 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:28.369896    4436 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:28.369930    4436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d3:21:d0:46:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2
	I1009 12:52:28.371733    4436 main.go:141] libmachine: STDOUT: 
	I1009 12:52:28.371745    4436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:28.371764    4436 client.go:171] duration metric: took 307.133375ms to LocalClient.Create
	I1009 12:52:30.373885    4436 start.go:128] duration metric: took 2.334987917s to createHost
	I1009 12:52:30.373983    4436 start.go:83] releasing machines lock for "auto-311000", held for 2.335132625s
	W1009 12:52:30.374054    4436 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:30.386266    4436 out.go:177] * Deleting "auto-311000" in qemu2 ...
	W1009 12:52:30.413928    4436 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:30.413953    4436 start.go:729] Will try again in 5 seconds ...
	I1009 12:52:35.416043    4436 start.go:360] acquireMachinesLock for auto-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:35.416603    4436 start.go:364] duration metric: took 452.166µs to acquireMachinesLock for "auto-311000"
	I1009 12:52:35.416727    4436 start.go:93] Provisioning new machine with config: &{Name:auto-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:35.417057    4436 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:35.430552    4436 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:52:35.480317    4436 start.go:159] libmachine.API.Create for "auto-311000" (driver="qemu2")
	I1009 12:52:35.480382    4436 client.go:168] LocalClient.Create starting
	I1009 12:52:35.480511    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:35.480600    4436 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:35.480616    4436 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:35.480689    4436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:35.480744    4436 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:35.480762    4436 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:35.481521    4436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:35.650887    4436 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:35.711853    4436 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:35.711858    4436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:35.712066    4436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2
	I1009 12:52:35.722131    4436 main.go:141] libmachine: STDOUT: 
	I1009 12:52:35.722165    4436 main.go:141] libmachine: STDERR: 
	I1009 12:52:35.722222    4436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2 +20000M
	I1009 12:52:35.730790    4436 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:35.730804    4436 main.go:141] libmachine: STDERR: 
	I1009 12:52:35.730821    4436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2
	I1009 12:52:35.730827    4436 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:35.730836    4436 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:35.730862    4436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c4:e2:91:01:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/auto-311000/disk.qcow2
	I1009 12:52:35.732658    4436 main.go:141] libmachine: STDOUT: 
	I1009 12:52:35.732673    4436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:35.732685    4436 client.go:171] duration metric: took 252.3065ms to LocalClient.Create
	I1009 12:52:37.734788    4436 start.go:128] duration metric: took 2.317749458s to createHost
	I1009 12:52:37.734888    4436 start.go:83] releasing machines lock for "auto-311000", held for 2.318299875s
	W1009 12:52:37.735246    4436 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:37.747857    4436 out.go:201] 
	W1009 12:52:37.751894    4436 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:52:37.751920    4436 out.go:270] * 
	W1009 12:52:37.754877    4436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:52:37.768868    4436 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.87s)
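Every failure in this group traces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU can boot the VM. A minimal standalone probe along the following lines (a hypothetical diagnostic, not part of net_test.go) would confirm whether the daemon is accepting connections at the path these tests use:

	// probe_socket_vmnet.go — an illustrative sketch, not part of the suite.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the cluster configs logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the STDERR captured above:
			// the socket_vmnet daemon is not running or not listening.
			fmt.Fprintf(os.Stderr, "socket_vmnet probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the probe fails, restarting the socket_vmnet daemon on the build agent (for a Homebrew install, typically "sudo brew services restart socket_vmnet") before rerunning the suite is the usual remedy.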

TestNetworkPlugins/group/kindnet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.9171695s)

-- stdout --
	* [kindnet-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-311000" primary control-plane node in "kindnet-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:52:40.086037    4545 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:52:40.086203    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:40.086206    4545 out.go:358] Setting ErrFile to fd 2...
	I1009 12:52:40.086209    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:40.086349    4545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:52:40.087481    4545 out.go:352] Setting JSON to false
	I1009 12:52:40.105463    4545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4930,"bootTime":1728498630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:52:40.105534    4545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:52:40.111538    4545 out.go:177] * [kindnet-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:52:40.119473    4545 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:52:40.119505    4545 notify.go:220] Checking for updates...
	I1009 12:52:40.125403    4545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:52:40.128511    4545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:52:40.131496    4545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:52:40.134399    4545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:52:40.137449    4545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:52:40.140881    4545 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:40.140965    4545 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:40.141012    4545 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:52:40.145393    4545 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:52:40.152473    4545 start.go:297] selected driver: qemu2
	I1009 12:52:40.152482    4545 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:52:40.152490    4545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:52:40.155090    4545 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:52:40.158359    4545 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:52:40.161558    4545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:52:40.161586    4545 cni.go:84] Creating CNI manager for "kindnet"
	I1009 12:52:40.161591    4545 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 12:52:40.161618    4545 start.go:340] cluster config:
	{Name:kindnet-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:52:40.166357    4545 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:52:40.174444    4545 out.go:177] * Starting "kindnet-311000" primary control-plane node in "kindnet-311000" cluster
	I1009 12:52:40.178481    4545 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:52:40.178498    4545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:52:40.178509    4545 cache.go:56] Caching tarball of preloaded images
	I1009 12:52:40.178598    4545 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:52:40.178611    4545 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:52:40.178671    4545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kindnet-311000/config.json ...
	I1009 12:52:40.178683    4545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kindnet-311000/config.json: {Name:mkecba2418188ac973eca369d0cc01f29f47074a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:52:40.178922    4545 start.go:360] acquireMachinesLock for kindnet-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:40.178968    4545 start.go:364] duration metric: took 41.25µs to acquireMachinesLock for "kindnet-311000"
	I1009 12:52:40.178978    4545 start.go:93] Provisioning new machine with config: &{Name:kindnet-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:40.179014    4545 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:40.187450    4545 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:52:40.204620    4545 start.go:159] libmachine.API.Create for "kindnet-311000" (driver="qemu2")
	I1009 12:52:40.204659    4545 client.go:168] LocalClient.Create starting
	I1009 12:52:40.204724    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:40.204759    4545 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:40.204776    4545 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:40.204818    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:40.204846    4545 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:40.204857    4545 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:40.205235    4545 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:40.362482    4545 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:40.409904    4545 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:40.409909    4545 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:40.410101    4545 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2
	I1009 12:52:40.419840    4545 main.go:141] libmachine: STDOUT: 
	I1009 12:52:40.419858    4545 main.go:141] libmachine: STDERR: 
	I1009 12:52:40.419917    4545 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2 +20000M
	I1009 12:52:40.428272    4545 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:40.428287    4545 main.go:141] libmachine: STDERR: 
	I1009 12:52:40.428312    4545 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2
	I1009 12:52:40.428317    4545 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:40.428329    4545 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:40.428357    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:71:03:59:7c:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2
	I1009 12:52:40.430157    4545 main.go:141] libmachine: STDOUT: 
	I1009 12:52:40.430170    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:40.430191    4545 client.go:171] duration metric: took 225.532625ms to LocalClient.Create
	I1009 12:52:42.432291    4545 start.go:128] duration metric: took 2.253333833s to createHost
	I1009 12:52:42.432340    4545 start.go:83] releasing machines lock for "kindnet-311000", held for 2.253436834s
	W1009 12:52:42.432410    4545 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:42.443402    4545 out.go:177] * Deleting "kindnet-311000" in qemu2 ...
	W1009 12:52:42.470562    4545 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:42.470584    4545 start.go:729] Will try again in 5 seconds ...
	I1009 12:52:47.472647    4545 start.go:360] acquireMachinesLock for kindnet-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:47.473124    4545 start.go:364] duration metric: took 379.458µs to acquireMachinesLock for "kindnet-311000"
	I1009 12:52:47.473215    4545 start.go:93] Provisioning new machine with config: &{Name:kindnet-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:47.473681    4545 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:47.487493    4545 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:52:47.536049    4545 start.go:159] libmachine.API.Create for "kindnet-311000" (driver="qemu2")
	I1009 12:52:47.536100    4545 client.go:168] LocalClient.Create starting
	I1009 12:52:47.536232    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:47.536330    4545 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:47.536357    4545 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:47.536426    4545 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:47.536489    4545 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:47.536505    4545 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:47.537117    4545 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:47.706491    4545 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:47.898350    4545 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:47.898357    4545 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:47.898579    4545 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2
	I1009 12:52:47.909011    4545 main.go:141] libmachine: STDOUT: 
	I1009 12:52:47.909040    4545 main.go:141] libmachine: STDERR: 
	I1009 12:52:47.909102    4545 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2 +20000M
	I1009 12:52:47.917583    4545 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:47.917602    4545 main.go:141] libmachine: STDERR: 
	I1009 12:52:47.917615    4545 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2
	I1009 12:52:47.917620    4545 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:47.917629    4545 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:47.917663    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:4a:41:47:bd:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kindnet-311000/disk.qcow2
	I1009 12:52:47.919517    4545 main.go:141] libmachine: STDOUT: 
	I1009 12:52:47.919536    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:47.919547    4545 client.go:171] duration metric: took 383.454791ms to LocalClient.Create
	I1009 12:52:49.921657    4545 start.go:128] duration metric: took 2.448026083s to createHost
	I1009 12:52:49.921701    4545 start.go:83] releasing machines lock for "kindnet-311000", held for 2.448633208s
	W1009 12:52:49.922074    4545 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:49.934709    4545 out.go:201] 
	W1009 12:52:49.939783    4545 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:52:49.939872    4545 out.go:270] * 
	W1009 12:52:49.942353    4545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:52:49.956660    4545 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.92s)
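The ~10-second wall time of each failing Start is accounted for by the retry visible in the log: a first createHost attempt of roughly 2.3s, the fixed pause logged as "Will try again in 5 seconds", and a second attempt that fails identically. A compressed sketch of that control flow (hypothetical createHost stand-in; not minikube's actual start.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for minikube's host creation; it fails the same
	// way the captured logs do.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost("kindnet-311000"); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost("kindnet-311000"); err != nil {
				// Surfaces as GUEST_PROVISION / exit status 80 in the report.
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

With each attempt finishing in about 2.3-2.4s, the 5-second pause accounts for essentially all of the 9.9s this test reports.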

TestNetworkPlugins/group/flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.893032542s)

-- stdout --
	* [flannel-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-311000" primary control-plane node in "flannel-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:52:52.367501    4661 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:52:52.367661    4661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:52.367665    4661 out.go:358] Setting ErrFile to fd 2...
	I1009 12:52:52.367667    4661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:52:52.367802    4661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:52:52.368951    4661 out.go:352] Setting JSON to false
	I1009 12:52:52.386505    4661 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4942,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:52:52.386590    4661 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:52:52.391238    4661 out.go:177] * [flannel-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:52:52.399115    4661 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:52:52.399166    4661 notify.go:220] Checking for updates...
	I1009 12:52:52.410284    4661 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:52:52.413270    4661 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:52:52.416257    4661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:52:52.419247    4661 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:52:52.420793    4661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:52:52.424529    4661 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:52.424610    4661 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:52:52.424662    4661 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:52:52.429209    4661 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:52:52.435273    4661 start.go:297] selected driver: qemu2
	I1009 12:52:52.435281    4661 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:52:52.435290    4661 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:52:52.438016    4661 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:52:52.441235    4661 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:52:52.444315    4661 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:52:52.444338    4661 cni.go:84] Creating CNI manager for "flannel"
	I1009 12:52:52.444344    4661 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1009 12:52:52.444381    4661 start.go:340] cluster config:
	{Name:flannel-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:52:52.449108    4661 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:52:52.458274    4661 out.go:177] * Starting "flannel-311000" primary control-plane node in "flannel-311000" cluster
	I1009 12:52:52.462170    4661 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:52:52.462190    4661 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:52:52.462200    4661 cache.go:56] Caching tarball of preloaded images
	I1009 12:52:52.462308    4661 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:52:52.462315    4661 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:52:52.462382    4661 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/flannel-311000/config.json ...
	I1009 12:52:52.462395    4661 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/flannel-311000/config.json: {Name:mkbee789ae246e82707eb3eace8bce248f131ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:52:52.462779    4661 start.go:360] acquireMachinesLock for flannel-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:52.462833    4661 start.go:364] duration metric: took 47.542µs to acquireMachinesLock for "flannel-311000"
	I1009 12:52:52.462844    4661 start.go:93] Provisioning new machine with config: &{Name:flannel-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:52.462885    4661 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:52.471222    4661 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:52:52.488152    4661 start.go:159] libmachine.API.Create for "flannel-311000" (driver="qemu2")
	I1009 12:52:52.488178    4661 client.go:168] LocalClient.Create starting
	I1009 12:52:52.488262    4661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:52.488297    4661 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:52.488307    4661 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:52.488344    4661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:52.488373    4661 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:52.488380    4661 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:52.488818    4661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:52:52.646849    4661 main.go:141] libmachine: Creating SSH key...
	I1009 12:52:52.788838    4661 main.go:141] libmachine: Creating Disk image...
	I1009 12:52:52.788844    4661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:52:52.789062    4661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2
	I1009 12:52:52.799408    4661 main.go:141] libmachine: STDOUT: 
	I1009 12:52:52.799429    4661 main.go:141] libmachine: STDERR: 
	I1009 12:52:52.799499    4661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2 +20000M
	I1009 12:52:52.807976    4661 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:52:52.807995    4661 main.go:141] libmachine: STDERR: 
	I1009 12:52:52.808008    4661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2
	I1009 12:52:52.808012    4661 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:52:52.808023    4661 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:52:52.808050    4661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a9:47:db:5d:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2
	I1009 12:52:52.809910    4661 main.go:141] libmachine: STDOUT: 
	I1009 12:52:52.809923    4661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:52:52.809942    4661 client.go:171] duration metric: took 321.768292ms to LocalClient.Create
	I1009 12:52:54.812054    4661 start.go:128] duration metric: took 2.349225625s to createHost
	I1009 12:52:54.812105    4661 start.go:83] releasing machines lock for "flannel-311000", held for 2.349340333s
	W1009 12:52:54.812172    4661 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:54.824468    4661 out.go:177] * Deleting "flannel-311000" in qemu2 ...
	W1009 12:52:54.854819    4661 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:52:54.854855    4661 start.go:729] Will try again in 5 seconds ...
	I1009 12:52:59.856957    4661 start.go:360] acquireMachinesLock for flannel-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:52:59.857587    4661 start.go:364] duration metric: took 503.542µs to acquireMachinesLock for "flannel-311000"
	I1009 12:52:59.857754    4661 start.go:93] Provisioning new machine with config: &{Name:flannel-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:52:59.858072    4661 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:52:59.871941    4661 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:52:59.920564    4661 start.go:159] libmachine.API.Create for "flannel-311000" (driver="qemu2")
	I1009 12:52:59.920614    4661 client.go:168] LocalClient.Create starting
	I1009 12:52:59.920744    4661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:52:59.920826    4661 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:59.920843    4661 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:59.920923    4661 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:52:59.920983    4661 main.go:141] libmachine: Decoding PEM data...
	I1009 12:52:59.921001    4661 main.go:141] libmachine: Parsing certificate...
	I1009 12:52:59.921736    4661 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:00.090450    4661 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:00.158636    4661 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:00.158642    4661 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:00.158847    4661 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2
	I1009 12:53:00.168722    4661 main.go:141] libmachine: STDOUT: 
	I1009 12:53:00.168740    4661 main.go:141] libmachine: STDERR: 
	I1009 12:53:00.168793    4661 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2 +20000M
	I1009 12:53:00.177207    4661 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:00.177227    4661 main.go:141] libmachine: STDERR: 
	I1009 12:53:00.177242    4661 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2
	I1009 12:53:00.177248    4661 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:00.177261    4661 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:00.177287    4661 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:26:df:78:18:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/flannel-311000/disk.qcow2
	I1009 12:53:00.179134    4661 main.go:141] libmachine: STDOUT: 
	I1009 12:53:00.179165    4661 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:00.179179    4661 client.go:171] duration metric: took 258.569417ms to LocalClient.Create
	I1009 12:53:02.181281    4661 start.go:128] duration metric: took 2.323251625s to createHost
	I1009 12:53:02.181339    4661 start.go:83] releasing machines lock for "flannel-311000", held for 2.323789333s
	W1009 12:53:02.181757    4661 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:02.194649    4661 out.go:201] 
	W1009 12:53:02.199491    4661 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:53:02.199534    4661 out.go:270] * 
	* 
	W1009 12:53:02.201804    4661 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:53:02.216333    4661 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.90s)
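The failure above is not flannel-specific: the qemu2 driver hands networking to /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach a socket_vmnet daemon behind /var/run/socket_vmnet ("Connection refused" on a Unix socket usually means the socket file exists but has no listener), so QEMU never starts and minikube exits with GUEST_PROVISION. A minimal triage sketch for the build agent follows; the daemon restart line is an assumption based on the lima-vm/socket_vmnet README rather than anything recorded in this log, so paths and flags should be checked against the agent's actual service setup:

	# Hypothetical triage session on the CI host (not captured in the log above):
	ls -l /var/run/socket_vmnet    # does the Unix socket exist, and who owns it?
	pgrep -fl socket_vmnet         # is the socket_vmnet daemon running at all?
	# If the daemon is down, bring it back before re-running the suite
	# (invocation per the lima-vm/socket_vmnet README; verify against the agent's config):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &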

TestNetworkPlugins/group/enable-default-cni/Start (10.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.0892755s)

-- stdout --
	* [enable-default-cni-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-311000" primary control-plane node in "enable-default-cni-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:53:04.781478    4780 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:53:04.781634    4780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:04.781637    4780 out.go:358] Setting ErrFile to fd 2...
	I1009 12:53:04.781640    4780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:04.781766    4780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:53:04.782859    4780 out.go:352] Setting JSON to false
	I1009 12:53:04.800499    4780 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4954,"bootTime":1728498630,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:53:04.800573    4780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:53:04.806999    4780 out.go:177] * [enable-default-cni-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:53:04.815810    4780 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:53:04.815847    4780 notify.go:220] Checking for updates...
	I1009 12:53:04.821776    4780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:53:04.824782    4780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:53:04.826189    4780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:53:04.833827    4780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:53:04.836831    4780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:53:04.840135    4780 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:04.840220    4780 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:04.840274    4780 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:53:04.843758    4780 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:53:04.850800    4780 start.go:297] selected driver: qemu2
	I1009 12:53:04.850809    4780 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:53:04.850817    4780 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:53:04.853310    4780 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:53:04.856809    4780 out.go:177] * Automatically selected the socket_vmnet network
	E1009 12:53:04.859882    4780 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1009 12:53:04.859897    4780 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:53:04.859915    4780 cni.go:84] Creating CNI manager for "bridge"
	I1009 12:53:04.859920    4780 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:53:04.859953    4780 start.go:340] cluster config:
	{Name:enable-default-cni-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:53:04.864629    4780 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:53:04.872811    4780 out.go:177] * Starting "enable-default-cni-311000" primary control-plane node in "enable-default-cni-311000" cluster
	I1009 12:53:04.876717    4780 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:53:04.876731    4780 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:53:04.876741    4780 cache.go:56] Caching tarball of preloaded images
	I1009 12:53:04.876819    4780 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:53:04.876825    4780 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:53:04.876889    4780 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/enable-default-cni-311000/config.json ...
	I1009 12:53:04.876901    4780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/enable-default-cni-311000/config.json: {Name:mkb3cfcae35dc825dd7782c8d14d724bbc606c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:53:04.877306    4780 start.go:360] acquireMachinesLock for enable-default-cni-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:04.877358    4780 start.go:364] duration metric: took 44.583µs to acquireMachinesLock for "enable-default-cni-311000"
	I1009 12:53:04.877370    4780 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:04.877403    4780 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:04.881826    4780 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:04.900028    4780 start.go:159] libmachine.API.Create for "enable-default-cni-311000" (driver="qemu2")
	I1009 12:53:04.900063    4780 client.go:168] LocalClient.Create starting
	I1009 12:53:04.900138    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:04.900177    4780 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:04.900189    4780 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:04.900228    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:04.900259    4780 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:04.900266    4780 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:04.900708    4780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:05.059908    4780 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:05.395344    4780 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:05.395357    4780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:05.395630    4780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2
	I1009 12:53:05.406040    4780 main.go:141] libmachine: STDOUT: 
	I1009 12:53:05.406060    4780 main.go:141] libmachine: STDERR: 
	I1009 12:53:05.406121    4780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2 +20000M
	I1009 12:53:05.414697    4780 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:05.414710    4780 main.go:141] libmachine: STDERR: 
	I1009 12:53:05.414724    4780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2
	I1009 12:53:05.414730    4780 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:05.414741    4780 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:05.414778    4780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:76:f5:8f:c0:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2
	I1009 12:53:05.416553    4780 main.go:141] libmachine: STDOUT: 
	I1009 12:53:05.416567    4780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:05.416592    4780 client.go:171] duration metric: took 516.532875ms to LocalClient.Create
	I1009 12:53:07.418716    4780 start.go:128] duration metric: took 2.541375625s to createHost
	I1009 12:53:07.418774    4780 start.go:83] releasing machines lock for "enable-default-cni-311000", held for 2.541490208s
	W1009 12:53:07.418836    4780 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:07.431866    4780 out.go:177] * Deleting "enable-default-cni-311000" in qemu2 ...
	W1009 12:53:07.457620    4780 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:07.457648    4780 start.go:729] Will try again in 5 seconds ...
	I1009 12:53:12.459662    4780 start.go:360] acquireMachinesLock for enable-default-cni-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:12.460297    4780 start.go:364] duration metric: took 515.417µs to acquireMachinesLock for "enable-default-cni-311000"
	I1009 12:53:12.460427    4780 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:12.460661    4780 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:12.475360    4780 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:12.524528    4780 start.go:159] libmachine.API.Create for "enable-default-cni-311000" (driver="qemu2")
	I1009 12:53:12.524578    4780 client.go:168] LocalClient.Create starting
	I1009 12:53:12.524740    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:12.524829    4780 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:12.524849    4780 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:12.524927    4780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:12.524999    4780 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:12.525012    4780 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:12.525577    4780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:12.694729    4780 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:12.779270    4780 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:12.779278    4780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:12.779481    4780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2
	I1009 12:53:12.789270    4780 main.go:141] libmachine: STDOUT: 
	I1009 12:53:12.789286    4780 main.go:141] libmachine: STDERR: 
	I1009 12:53:12.789339    4780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2 +20000M
	I1009 12:53:12.797818    4780 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:12.797847    4780 main.go:141] libmachine: STDERR: 
	I1009 12:53:12.797860    4780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2
	I1009 12:53:12.797866    4780 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:12.797874    4780 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:12.797912    4780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5a:77:39:64:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/enable-default-cni-311000/disk.qcow2
	I1009 12:53:12.799776    4780 main.go:141] libmachine: STDOUT: 
	I1009 12:53:12.799788    4780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:12.799800    4780 client.go:171] duration metric: took 275.224959ms to LocalClient.Create
	I1009 12:53:14.800162    4780 start.go:128] duration metric: took 2.339516834s to createHost
	I1009 12:53:14.800250    4780 start.go:83] releasing machines lock for "enable-default-cni-311000", held for 2.340009042s
	W1009 12:53:14.800519    4780 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:14.809269    4780 out.go:201] 
	W1009 12:53:14.815336    4780 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:53:14.815361    4780 out.go:270] * 
	* 
	W1009 12:53:14.816888    4780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:53:14.826191    4780 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.09s)
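One detail worth noting before the bridge run below: the E-level line in this log ("Found deprecated --enable-default-cni flag, setting --cni=bridge") shows minikube rewriting the deprecated flag, so this profile exercises the bridge CNI just like the next test. Once socket_vmnet is reachable, an equivalent invocation without the deprecated flag would be (a sketch reusing this test's profile name and flags, not a command captured in the log):

	out/minikube-darwin-arm64 start -p enable-default-cni-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2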

TestNetworkPlugins/group/bridge/Start (10.06s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.0625805s)

-- stdout --
	* [bridge-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-311000" primary control-plane node in "bridge-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:53:17.186492    4892 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:53:17.186658    4892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:17.186662    4892 out.go:358] Setting ErrFile to fd 2...
	I1009 12:53:17.186664    4892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:17.186783    4892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:53:17.188252    4892 out.go:352] Setting JSON to false
	I1009 12:53:17.206169    4892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4967,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:53:17.206261    4892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:53:17.212841    4892 out.go:177] * [bridge-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:53:17.220825    4892 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:53:17.220873    4892 notify.go:220] Checking for updates...
	I1009 12:53:17.226807    4892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:53:17.229780    4892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:53:17.231101    4892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:53:17.233743    4892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:53:17.236801    4892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:53:17.240157    4892 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:17.240233    4892 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:17.240284    4892 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:53:17.244760    4892 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:53:17.251772    4892 start.go:297] selected driver: qemu2
	I1009 12:53:17.251778    4892 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:53:17.251784    4892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:53:17.254405    4892 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:53:17.257836    4892 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:53:17.260858    4892 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:53:17.260876    4892 cni.go:84] Creating CNI manager for "bridge"
	I1009 12:53:17.260880    4892 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:53:17.260923    4892 start.go:340] cluster config:
	{Name:bridge-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:53:17.265600    4892 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:53:17.273741    4892 out.go:177] * Starting "bridge-311000" primary control-plane node in "bridge-311000" cluster
	I1009 12:53:17.277781    4892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:53:17.277799    4892 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:53:17.277808    4892 cache.go:56] Caching tarball of preloaded images
	I1009 12:53:17.277883    4892 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:53:17.277888    4892 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:53:17.277954    4892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/bridge-311000/config.json ...
	I1009 12:53:17.277965    4892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/bridge-311000/config.json: {Name:mkeb9f756b2ad8709c0110fc3a2b690e4511094f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:53:17.278337    4892 start.go:360] acquireMachinesLock for bridge-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:17.278384    4892 start.go:364] duration metric: took 41.75µs to acquireMachinesLock for "bridge-311000"
	I1009 12:53:17.278395    4892 start.go:93] Provisioning new machine with config: &{Name:bridge-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:17.278422    4892 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:17.282746    4892 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:17.299412    4892 start.go:159] libmachine.API.Create for "bridge-311000" (driver="qemu2")
	I1009 12:53:17.299433    4892 client.go:168] LocalClient.Create starting
	I1009 12:53:17.299562    4892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:17.299599    4892 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:17.299614    4892 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:17.299648    4892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:17.299677    4892 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:17.299689    4892 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:17.300128    4892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:17.458349    4892 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:17.707754    4892 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:17.707764    4892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:17.708041    4892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2
	I1009 12:53:17.718514    4892 main.go:141] libmachine: STDOUT: 
	I1009 12:53:17.718575    4892 main.go:141] libmachine: STDERR: 
	I1009 12:53:17.718632    4892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2 +20000M
	I1009 12:53:17.727139    4892 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:17.727154    4892 main.go:141] libmachine: STDERR: 
	I1009 12:53:17.727165    4892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2
	I1009 12:53:17.727169    4892 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:17.727181    4892 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:17.727217    4892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:de:9e:35:b1:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2
	I1009 12:53:17.729021    4892 main.go:141] libmachine: STDOUT: 
	I1009 12:53:17.729034    4892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:17.729054    4892 client.go:171] duration metric: took 429.630042ms to LocalClient.Create
	I1009 12:53:19.731160    4892 start.go:128] duration metric: took 2.452792833s to createHost
	I1009 12:53:19.731220    4892 start.go:83] releasing machines lock for "bridge-311000", held for 2.452908125s
	W1009 12:53:19.732373    4892 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:19.744600    4892 out.go:177] * Deleting "bridge-311000" in qemu2 ...
	W1009 12:53:19.775284    4892 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:19.775322    4892 start.go:729] Will try again in 5 seconds ...
	I1009 12:53:24.777398    4892 start.go:360] acquireMachinesLock for bridge-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:24.777933    4892 start.go:364] duration metric: took 433.209µs to acquireMachinesLock for "bridge-311000"
	I1009 12:53:24.778054    4892 start.go:93] Provisioning new machine with config: &{Name:bridge-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:24.778347    4892 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:24.784005    4892 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:24.832485    4892 start.go:159] libmachine.API.Create for "bridge-311000" (driver="qemu2")
	I1009 12:53:24.832536    4892 client.go:168] LocalClient.Create starting
	I1009 12:53:24.832684    4892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:24.832767    4892 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:24.832791    4892 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:24.832875    4892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:24.832936    4892 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:24.832952    4892 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:24.833885    4892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:25.005410    4892 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:25.147807    4892 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:25.147818    4892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:25.148025    4892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2
	I1009 12:53:25.157897    4892 main.go:141] libmachine: STDOUT: 
	I1009 12:53:25.157920    4892 main.go:141] libmachine: STDERR: 
	I1009 12:53:25.157977    4892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2 +20000M
	I1009 12:53:25.166364    4892 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:25.166379    4892 main.go:141] libmachine: STDERR: 
	I1009 12:53:25.166394    4892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2
	I1009 12:53:25.166401    4892 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:25.166410    4892 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:25.166444    4892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:83:bd:33:89:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/bridge-311000/disk.qcow2
	I1009 12:53:25.168212    4892 main.go:141] libmachine: STDOUT: 
	I1009 12:53:25.168225    4892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:25.168241    4892 client.go:171] duration metric: took 335.709583ms to LocalClient.Create
	I1009 12:53:27.170345    4892 start.go:128] duration metric: took 2.39205175s to createHost
	I1009 12:53:27.170402    4892 start.go:83] releasing machines lock for "bridge-311000", held for 2.392526166s
	W1009 12:53:27.170757    4892 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:27.182414    4892 out.go:201] 
	W1009 12:53:27.186635    4892 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:53:27.186665    4892 out.go:270] * 
	W1009 12:53:27.189529    4892 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:53:27.202448    4892 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.06s)
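
Every failure in this group bottoms out at the same step: before qemu-system-aarch64 can start, /opt/socket_vmnet/bin/socket_vmnet_client must connect to the unix socket at /var/run/socket_vmnet, and that dial is refused. A minimal Go sketch of the same probe (a diagnostic aid, not part of the test suite; the socket path is the SocketVMnetPath value from the config dumps above) separates a missing socket file from a present file with no daemon listening, which is what "connection refused" means here:

	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		if _, err := os.Stat(sock); err != nil {
			// No socket file at all: the socket_vmnet daemon was never started,
			// or it listens on a different path.
			fmt.Println("socket path missing:", err)
			return
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" on an existing socket file means nothing is
			// accepting connections, matching the STDERR lines in this report.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Until this probe succeeds on the build host, every qemu2 start that selects the socket_vmnet network will fail identically, regardless of which network plugin is under test.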

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.930191833s)

                                                
                                                
-- stdout --
	* [kubenet-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-311000" primary control-plane node in "kubenet-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:53:29.559522    5001 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:53:29.559690    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:29.559694    5001 out.go:358] Setting ErrFile to fd 2...
	I1009 12:53:29.559697    5001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:29.559810    5001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:53:29.560978    5001 out.go:352] Setting JSON to false
	I1009 12:53:29.578599    5001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4979,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:53:29.578689    5001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:53:29.584602    5001 out.go:177] * [kubenet-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:53:29.591599    5001 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:53:29.591637    5001 notify.go:220] Checking for updates...
	I1009 12:53:29.598588    5001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:53:29.601492    5001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:53:29.604588    5001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:53:29.607591    5001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:53:29.610581    5001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:53:29.613983    5001 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:29.614061    5001 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:29.614108    5001 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:53:29.618533    5001 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:53:29.625554    5001 start.go:297] selected driver: qemu2
	I1009 12:53:29.625561    5001 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:53:29.625568    5001 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:53:29.628143    5001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:53:29.631597    5001 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:53:29.634669    5001 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:53:29.634689    5001 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1009 12:53:29.634731    5001 start.go:340] cluster config:
	{Name:kubenet-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:53:29.639394    5001 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:53:29.647563    5001 out.go:177] * Starting "kubenet-311000" primary control-plane node in "kubenet-311000" cluster
	I1009 12:53:29.651569    5001 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:53:29.651586    5001 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:53:29.651599    5001 cache.go:56] Caching tarball of preloaded images
	I1009 12:53:29.651692    5001 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:53:29.651698    5001 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:53:29.651759    5001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kubenet-311000/config.json ...
	I1009 12:53:29.651771    5001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/kubenet-311000/config.json: {Name:mk2e209cc0d22ef9bf8e4dd1a6b52b38871ec79a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:53:29.652158    5001 start.go:360] acquireMachinesLock for kubenet-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:29.652218    5001 start.go:364] duration metric: took 53.959µs to acquireMachinesLock for "kubenet-311000"
	I1009 12:53:29.652229    5001 start.go:93] Provisioning new machine with config: &{Name:kubenet-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:29.652266    5001 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:29.656447    5001 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:29.673982    5001 start.go:159] libmachine.API.Create for "kubenet-311000" (driver="qemu2")
	I1009 12:53:29.674011    5001 client.go:168] LocalClient.Create starting
	I1009 12:53:29.674074    5001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:29.674115    5001 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:29.674127    5001 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:29.674172    5001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:29.674209    5001 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:29.674217    5001 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:29.674663    5001 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:29.831526    5001 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:30.046982    5001 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:30.046992    5001 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:30.047266    5001 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2
	I1009 12:53:30.058046    5001 main.go:141] libmachine: STDOUT: 
	I1009 12:53:30.058071    5001 main.go:141] libmachine: STDERR: 
	I1009 12:53:30.058122    5001 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2 +20000M
	I1009 12:53:30.066619    5001 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:30.066634    5001 main.go:141] libmachine: STDERR: 
	I1009 12:53:30.066657    5001 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2
	I1009 12:53:30.066662    5001 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:30.066677    5001 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:30.066709    5001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:27:42:bb:80:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2
	I1009 12:53:30.068522    5001 main.go:141] libmachine: STDOUT: 
	I1009 12:53:30.068540    5001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:30.068562    5001 client.go:171] duration metric: took 394.557708ms to LocalClient.Create
	I1009 12:53:32.070669    5001 start.go:128] duration metric: took 2.418464625s to createHost
	I1009 12:53:32.070770    5001 start.go:83] releasing machines lock for "kubenet-311000", held for 2.418621916s
	W1009 12:53:32.070834    5001 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:32.081203    5001 out.go:177] * Deleting "kubenet-311000" in qemu2 ...
	W1009 12:53:32.108495    5001 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:32.108528    5001 start.go:729] Will try again in 5 seconds ...
	I1009 12:53:37.110581    5001 start.go:360] acquireMachinesLock for kubenet-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:37.111062    5001 start.go:364] duration metric: took 403.875µs to acquireMachinesLock for "kubenet-311000"
	I1009 12:53:37.111182    5001 start.go:93] Provisioning new machine with config: &{Name:kubenet-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:37.111564    5001 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:37.122118    5001 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:37.171405    5001 start.go:159] libmachine.API.Create for "kubenet-311000" (driver="qemu2")
	I1009 12:53:37.171448    5001 client.go:168] LocalClient.Create starting
	I1009 12:53:37.171582    5001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:37.171671    5001 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:37.171690    5001 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:37.171769    5001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:37.171829    5001 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:37.171841    5001 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:37.172540    5001 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:37.341503    5001 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:37.386075    5001 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:37.386081    5001 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:37.386287    5001 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2
	I1009 12:53:37.396201    5001 main.go:141] libmachine: STDOUT: 
	I1009 12:53:37.396232    5001 main.go:141] libmachine: STDERR: 
	I1009 12:53:37.396297    5001 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2 +20000M
	I1009 12:53:37.404806    5001 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:37.404827    5001 main.go:141] libmachine: STDERR: 
	I1009 12:53:37.404838    5001 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2
	I1009 12:53:37.404843    5001 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:37.404850    5001 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:37.404898    5001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:5b:7a:22:e7:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/kubenet-311000/disk.qcow2
	I1009 12:53:37.406734    5001 main.go:141] libmachine: STDOUT: 
	I1009 12:53:37.406749    5001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:37.406759    5001 client.go:171] duration metric: took 235.313333ms to LocalClient.Create
	I1009 12:53:39.408948    5001 start.go:128] duration metric: took 2.297348416s to createHost
	I1009 12:53:39.409017    5001 start.go:83] releasing machines lock for "kubenet-311000", held for 2.298008s
	W1009 12:53:39.409312    5001 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:39.424013    5001 out.go:201] 
	W1009 12:53:39.428054    5001 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:53:39.428078    5001 out.go:270] * 
	W1009 12:53:39.430687    5001 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:53:39.444035    5001 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.93s)
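
The invocations above also show why the refusal is fatal before the VM boots: socket_vmnet_client is evidently expected to dial the socket and pass the resulting descriptor to qemu-system-aarch64 as file descriptor 3, which is what "-netdev socket,id=net0,fd=3" refers to. A schematic Go sketch of that descriptor-passing pattern (an illustration of the wrapper's apparent role, assuming standard fd inheritance; not socket_vmnet's actual source):

	package main
	
	import (
		"fmt"
		"net"
		"os"
		"os/exec"
	)
	
	func main() {
		// The step this report fails at: dialing the unix socket.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("no descriptor to hand to qemu:", err)
			os.Exit(1)
		}
		// Duplicate the connection as an *os.File so a child process can inherit it.
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles entry 0 becomes fd 3 in the child (after stdin/stdout/stderr),
		// which is what "-netdev socket,id=net0,fd=3" points qemu at.
		cmd := exec.Command("qemu-system-aarch64" /* remaining flags as logged above */)
		cmd.ExtraFiles = []*os.File{f}
		fmt.Println("would start:", cmd.Path)
	}

Since the dial is the first step, qemu never launches, and the only STDERR the driver can report is the connection error itself.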

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.960934042s)

                                                
                                                
-- stdout --
	* [custom-flannel-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-311000" primary control-plane node in "custom-flannel-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 12:53:41.819944    5110 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:53:41.820106    5110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:41.820112    5110 out.go:358] Setting ErrFile to fd 2...
	I1009 12:53:41.820115    5110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:41.820251    5110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:53:41.821439    5110 out.go:352] Setting JSON to false
	I1009 12:53:41.838974    5110 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4991,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:53:41.839049    5110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:53:41.844953    5110 out.go:177] * [custom-flannel-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:53:41.850695    5110 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:53:41.850736    5110 notify.go:220] Checking for updates...
	I1009 12:53:41.857012    5110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:53:41.858483    5110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:53:41.861998    5110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:53:41.865024    5110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:53:41.868019    5110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:53:41.871365    5110 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:41.871442    5110 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:41.871495    5110 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:53:41.875974    5110 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:53:41.883013    5110 start.go:297] selected driver: qemu2
	I1009 12:53:41.883021    5110 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:53:41.883029    5110 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:53:41.885489    5110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:53:41.888962    5110 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:53:41.892090    5110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:53:41.892116    5110 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1009 12:53:41.892136    5110 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1009 12:53:41.892178    5110 start.go:340] cluster config:
	{Name:custom-flannel-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:53:41.896988    5110 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:53:41.905001    5110 out.go:177] * Starting "custom-flannel-311000" primary control-plane node in "custom-flannel-311000" cluster
	I1009 12:53:41.908974    5110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:53:41.908990    5110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:53:41.909001    5110 cache.go:56] Caching tarball of preloaded images
	I1009 12:53:41.909079    5110 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:53:41.909084    5110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:53:41.909160    5110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/custom-flannel-311000/config.json ...
	I1009 12:53:41.909171    5110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/custom-flannel-311000/config.json: {Name:mk86eac945a47193aa1568fc466f7c79ebdcd319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:53:41.909553    5110 start.go:360] acquireMachinesLock for custom-flannel-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:41.909608    5110 start.go:364] duration metric: took 44.417µs to acquireMachinesLock for "custom-flannel-311000"
	I1009 12:53:41.909619    5110 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:41.909645    5110 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:41.917860    5110 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:41.935153    5110 start.go:159] libmachine.API.Create for "custom-flannel-311000" (driver="qemu2")
	I1009 12:53:41.935183    5110 client.go:168] LocalClient.Create starting
	I1009 12:53:41.935254    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:41.935290    5110 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:41.935300    5110 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:41.935343    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:41.935373    5110 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:41.935380    5110 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:41.935845    5110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:42.094213    5110 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:42.327801    5110 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:42.327812    5110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:42.328099    5110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2
	I1009 12:53:42.338962    5110 main.go:141] libmachine: STDOUT: 
	I1009 12:53:42.338990    5110 main.go:141] libmachine: STDERR: 
	I1009 12:53:42.339066    5110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2 +20000M
	I1009 12:53:42.347542    5110 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:42.347558    5110 main.go:141] libmachine: STDERR: 
	I1009 12:53:42.347581    5110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2
	I1009 12:53:42.347586    5110 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:42.347597    5110 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:42.347626    5110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:25:eb:5f:a6:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2
	I1009 12:53:42.349417    5110 main.go:141] libmachine: STDOUT: 
	I1009 12:53:42.349431    5110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:42.349448    5110 client.go:171] duration metric: took 414.272792ms to LocalClient.Create
	I1009 12:53:44.351562    5110 start.go:128] duration metric: took 2.441976166s to createHost
	I1009 12:53:44.351631    5110 start.go:83] releasing machines lock for "custom-flannel-311000", held for 2.442094667s
	W1009 12:53:44.351704    5110 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:44.364846    5110 out.go:177] * Deleting "custom-flannel-311000" in qemu2 ...
	W1009 12:53:44.391193    5110 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:44.391221    5110 start.go:729] Will try again in 5 seconds ...
	I1009 12:53:49.393288    5110 start.go:360] acquireMachinesLock for custom-flannel-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:49.393833    5110 start.go:364] duration metric: took 443.167µs to acquireMachinesLock for "custom-flannel-311000"
	I1009 12:53:49.393949    5110 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:49.394262    5110 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:49.400011    5110 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:49.449649    5110 start.go:159] libmachine.API.Create for "custom-flannel-311000" (driver="qemu2")
	I1009 12:53:49.449700    5110 client.go:168] LocalClient.Create starting
	I1009 12:53:49.449836    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:49.449913    5110 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:49.449927    5110 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:49.449981    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:49.450036    5110 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:49.450050    5110 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:49.450743    5110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:49.623238    5110 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:49.682446    5110 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:49.682451    5110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:49.682640    5110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2
	I1009 12:53:49.692429    5110 main.go:141] libmachine: STDOUT: 
	I1009 12:53:49.692447    5110 main.go:141] libmachine: STDERR: 
	I1009 12:53:49.692494    5110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2 +20000M
	I1009 12:53:49.700848    5110 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:49.700861    5110 main.go:141] libmachine: STDERR: 
	I1009 12:53:49.700872    5110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2
	I1009 12:53:49.700890    5110 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:49.700899    5110 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:49.700924    5110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:1b:d6:47:84:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/custom-flannel-311000/disk.qcow2
	I1009 12:53:49.702710    5110 main.go:141] libmachine: STDOUT: 
	I1009 12:53:49.702723    5110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:49.702738    5110 client.go:171] duration metric: took 253.042ms to LocalClient.Create
	I1009 12:53:51.704846    5110 start.go:128] duration metric: took 2.310616375s to createHost
	I1009 12:53:51.704913    5110 start.go:83] releasing machines lock for "custom-flannel-311000", held for 2.31112675s
	W1009 12:53:51.705242    5110 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:51.716939    5110 out.go:201] 
	W1009 12:53:51.720982    5110 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:53:51.721024    5110 out.go:270] * 
	* 
	W1009 12:53:51.723660    5110 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:53:51.732885    5110 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.96s)
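
Every failure in this group reduces to the same line in the captured stderr: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 launch never happens. A minimal Go sketch for checking the daemon independently of minikube (not part of the test suite; the socket path is the SocketVMnetPath value from the cluster config above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path as reported in SocketVMnetPath in the logs above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With the daemon down this reports "connect: connection refused".
            fmt.Printf("socket_vmnet unreachable: %v\n", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

When the daemon is stopped, the dial error matches the `Failed to connect to "/var/run/socket_vmnet": Connection refused` lines above.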

TestNetworkPlugins/group/calico/Start (9.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.9491385s)

-- stdout --
	* [calico-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-311000" primary control-plane node in "calico-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:53:54.301063    5227 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:53:54.301200    5227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:54.301203    5227 out.go:358] Setting ErrFile to fd 2...
	I1009 12:53:54.301205    5227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:53:54.301335    5227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:53:54.302503    5227 out.go:352] Setting JSON to false
	I1009 12:53:54.320070    5227 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5004,"bootTime":1728498630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:53:54.320133    5227 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:53:54.326706    5227 out.go:177] * [calico-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:53:54.333629    5227 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:53:54.333695    5227 notify.go:220] Checking for updates...
	I1009 12:53:54.339551    5227 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:53:54.342606    5227 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:53:54.345633    5227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:53:54.348548    5227 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:53:54.351616    5227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:53:54.355026    5227 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:54.355102    5227 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:53:54.355148    5227 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:53:54.359560    5227 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:53:54.366677    5227 start.go:297] selected driver: qemu2
	I1009 12:53:54.366686    5227 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:53:54.366694    5227 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:53:54.369272    5227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:53:54.372570    5227 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:53:54.375664    5227 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:53:54.375678    5227 cni.go:84] Creating CNI manager for "calico"
	I1009 12:53:54.375681    5227 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1009 12:53:54.375714    5227 start.go:340] cluster config:
	{Name:calico-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:53:54.380335    5227 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:53:54.388597    5227 out.go:177] * Starting "calico-311000" primary control-plane node in "calico-311000" cluster
	I1009 12:53:54.392593    5227 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:53:54.392609    5227 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:53:54.392618    5227 cache.go:56] Caching tarball of preloaded images
	I1009 12:53:54.392712    5227 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:53:54.392718    5227 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:53:54.392776    5227 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/calico-311000/config.json ...
	I1009 12:53:54.392788    5227 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/calico-311000/config.json: {Name:mk4d7cca2ec3500b3af15396d15972f978152ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:53:54.393187    5227 start.go:360] acquireMachinesLock for calico-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:53:54.393239    5227 start.go:364] duration metric: took 45µs to acquireMachinesLock for "calico-311000"
	I1009 12:53:54.393250    5227 start.go:93] Provisioning new machine with config: &{Name:calico-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:53:54.393276    5227 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:53:54.401577    5227 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:53:54.419620    5227 start.go:159] libmachine.API.Create for "calico-311000" (driver="qemu2")
	I1009 12:53:54.419642    5227 client.go:168] LocalClient.Create starting
	I1009 12:53:54.419711    5227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:53:54.419748    5227 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:54.419758    5227 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:54.419794    5227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:53:54.419824    5227 main.go:141] libmachine: Decoding PEM data...
	I1009 12:53:54.419836    5227 main.go:141] libmachine: Parsing certificate...
	I1009 12:53:54.420344    5227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:53:54.581278    5227 main.go:141] libmachine: Creating SSH key...
	I1009 12:53:54.679847    5227 main.go:141] libmachine: Creating Disk image...
	I1009 12:53:54.679854    5227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:53:54.680059    5227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2
	I1009 12:53:54.690099    5227 main.go:141] libmachine: STDOUT: 
	I1009 12:53:54.690120    5227 main.go:141] libmachine: STDERR: 
	I1009 12:53:54.690185    5227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2 +20000M
	I1009 12:53:54.698726    5227 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:53:54.698739    5227 main.go:141] libmachine: STDERR: 
	I1009 12:53:54.698754    5227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2
	I1009 12:53:54.698760    5227 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:53:54.698771    5227 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:53:54.698798    5227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:c4:ce:0b:15:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2
	I1009 12:53:54.700615    5227 main.go:141] libmachine: STDOUT: 
	I1009 12:53:54.700645    5227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:53:54.700664    5227 client.go:171] duration metric: took 281.025917ms to LocalClient.Create
	I1009 12:53:56.702773    5227 start.go:128] duration metric: took 2.309552583s to createHost
	I1009 12:53:56.702840    5227 start.go:83] releasing machines lock for "calico-311000", held for 2.309668834s
	W1009 12:53:56.702902    5227 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:56.717049    5227 out.go:177] * Deleting "calico-311000" in qemu2 ...
	W1009 12:53:56.742521    5227 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:53:56.742545    5227 start.go:729] Will try again in 5 seconds ...
	I1009 12:54:01.744652    5227 start.go:360] acquireMachinesLock for calico-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:01.745199    5227 start.go:364] duration metric: took 430.875µs to acquireMachinesLock for "calico-311000"
	I1009 12:54:01.745390    5227 start.go:93] Provisioning new machine with config: &{Name:calico-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:01.745642    5227 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:01.759515    5227 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:54:01.809018    5227 start.go:159] libmachine.API.Create for "calico-311000" (driver="qemu2")
	I1009 12:54:01.809077    5227 client.go:168] LocalClient.Create starting
	I1009 12:54:01.809224    5227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:01.809327    5227 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:01.809347    5227 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:01.809418    5227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:01.809475    5227 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:01.809487    5227 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:01.810344    5227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:01.979088    5227 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:02.152905    5227 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:02.152914    5227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:02.153149    5227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2
	I1009 12:54:02.163594    5227 main.go:141] libmachine: STDOUT: 
	I1009 12:54:02.163621    5227 main.go:141] libmachine: STDERR: 
	I1009 12:54:02.163681    5227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2 +20000M
	I1009 12:54:02.172143    5227 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:02.172160    5227 main.go:141] libmachine: STDERR: 
	I1009 12:54:02.172179    5227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2
	I1009 12:54:02.172184    5227 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:02.172193    5227 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:02.172222    5227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:05:94:bf:25:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/calico-311000/disk.qcow2
	I1009 12:54:02.174061    5227 main.go:141] libmachine: STDOUT: 
	I1009 12:54:02.174077    5227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:02.174089    5227 client.go:171] duration metric: took 365.017583ms to LocalClient.Create
	I1009 12:54:04.176199    5227 start.go:128] duration metric: took 2.430606917s to createHost
	I1009 12:54:04.176257    5227 start.go:83] releasing machines lock for "calico-311000", held for 2.431115792s
	W1009 12:54:04.176552    5227 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:04.187242    5227 out.go:201] 
	W1009 12:54:04.192340    5227 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:04.192402    5227 out.go:270] * 
	* 
	W1009 12:54:04.194748    5227 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:54:04.205272    5227 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.95s)
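
Note that disk preparation succeeds on every attempt; only the networking step fails. The `Creating 20000 MB hard disk image...` phase is two qemu-img calls, visible verbatim in the logs: a raw-to-qcow2 convert followed by a `+20000M` resize. A hedged Go sketch of the same two steps with illustrative file names (assumes qemu-img is on PATH; these are not the CI paths):

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a command and aborts on failure, echoing its combined output.
    func run(name string, args ...string) {
        if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        raw, img := "disk.qcow2.raw", "disk.qcow2" // illustrative names
        // Step 1: convert the raw boot disk to qcow2 ("qemu-img convert" in the logs).
        run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
        // Step 2: grow the virtual disk by 20000M ("qemu-img resize" in the logs).
        run("qemu-img", "resize", img, "+20000M")
    }

The resize only grows the image's virtual size; qcow2 allocates host disk space lazily, which is why both commands return in milliseconds in the timestamps above.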

TestNetworkPlugins/group/false/Start (9.89s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-311000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.884677041s)

-- stdout --
	* [false-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-311000" primary control-plane node in "false-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:54:06.763685    5347 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:06.763821    5347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:06.763825    5347 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:06.763827    5347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:06.763962    5347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:06.765115    5347 out.go:352] Setting JSON to false
	I1009 12:54:06.782711    5347 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5016,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:54:06.782776    5347 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:54:06.789093    5347 out.go:177] * [false-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:54:06.797045    5347 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:54:06.797127    5347 notify.go:220] Checking for updates...
	I1009 12:54:06.803055    5347 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:54:06.806027    5347 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:54:06.807445    5347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:54:06.811053    5347 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:54:06.814040    5347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:54:06.817476    5347 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:06.817556    5347 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:06.817606    5347 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:54:06.821952    5347 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:54:06.829083    5347 start.go:297] selected driver: qemu2
	I1009 12:54:06.829092    5347 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:54:06.829099    5347 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:54:06.831587    5347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:54:06.839034    5347 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:54:06.842132    5347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:54:06.842168    5347 cni.go:84] Creating CNI manager for "false"
	I1009 12:54:06.842203    5347 start.go:340] cluster config:
	{Name:false-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:06.846961    5347 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:06.854988    5347 out.go:177] * Starting "false-311000" primary control-plane node in "false-311000" cluster
	I1009 12:54:06.859045    5347 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:54:06.859058    5347 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:54:06.859076    5347 cache.go:56] Caching tarball of preloaded images
	I1009 12:54:06.859154    5347 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:54:06.859160    5347 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:54:06.859214    5347 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/false-311000/config.json ...
	I1009 12:54:06.859226    5347 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/false-311000/config.json: {Name:mk99048b7c221c843d1d26d8238fe8249d2539a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:54:06.859593    5347 start.go:360] acquireMachinesLock for false-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:06.859648    5347 start.go:364] duration metric: took 48.291µs to acquireMachinesLock for "false-311000"
	I1009 12:54:06.859660    5347 start.go:93] Provisioning new machine with config: &{Name:false-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:06.859694    5347 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:06.868021    5347 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:54:06.885881    5347 start.go:159] libmachine.API.Create for "false-311000" (driver="qemu2")
	I1009 12:54:06.885911    5347 client.go:168] LocalClient.Create starting
	I1009 12:54:06.885982    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:06.886022    5347 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:06.886035    5347 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:06.886074    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:06.886104    5347 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:06.886118    5347 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:06.886564    5347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:07.045103    5347 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:07.111993    5347 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:07.111999    5347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:07.112202    5347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2
	I1009 12:54:07.122121    5347 main.go:141] libmachine: STDOUT: 
	I1009 12:54:07.122148    5347 main.go:141] libmachine: STDERR: 
	I1009 12:54:07.122206    5347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2 +20000M
	I1009 12:54:07.130759    5347 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:07.130774    5347 main.go:141] libmachine: STDERR: 
	I1009 12:54:07.130789    5347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2
	I1009 12:54:07.130793    5347 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:07.130810    5347 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:07.130839    5347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:66:0f:09:16:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2
	I1009 12:54:07.132614    5347 main.go:141] libmachine: STDOUT: 
	I1009 12:54:07.132627    5347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:07.132646    5347 client.go:171] duration metric: took 246.736916ms to LocalClient.Create
	I1009 12:54:09.134757    5347 start.go:128] duration metric: took 2.275120042s to createHost
	I1009 12:54:09.134813    5347 start.go:83] releasing machines lock for "false-311000", held for 2.275231167s
	W1009 12:54:09.134874    5347 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:09.145158    5347 out.go:177] * Deleting "false-311000" in qemu2 ...
	W1009 12:54:09.172564    5347 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:09.172585    5347 start.go:729] Will try again in 5 seconds ...
	I1009 12:54:14.174673    5347 start.go:360] acquireMachinesLock for false-311000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:14.175193    5347 start.go:364] duration metric: took 425.291µs to acquireMachinesLock for "false-311000"
	I1009 12:54:14.175305    5347 start.go:93] Provisioning new machine with config: &{Name:false-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:14.175588    5347 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:14.187310    5347 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 12:54:14.239312    5347 start.go:159] libmachine.API.Create for "false-311000" (driver="qemu2")
	I1009 12:54:14.239375    5347 client.go:168] LocalClient.Create starting
	I1009 12:54:14.239530    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:14.239614    5347 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:14.239632    5347 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:14.239696    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:14.239759    5347 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:14.239770    5347 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:14.240369    5347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:14.410474    5347 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:14.552424    5347 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:14.552432    5347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:14.552670    5347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2
	I1009 12:54:14.563034    5347 main.go:141] libmachine: STDOUT: 
	I1009 12:54:14.563051    5347 main.go:141] libmachine: STDERR: 
	I1009 12:54:14.563106    5347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2 +20000M
	I1009 12:54:14.571602    5347 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:14.571616    5347 main.go:141] libmachine: STDERR: 
	I1009 12:54:14.571629    5347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2
	I1009 12:54:14.571648    5347 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:14.571661    5347 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:14.571694    5347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:8c:5d:c8:6a:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/false-311000/disk.qcow2
	I1009 12:54:14.573502    5347 main.go:141] libmachine: STDOUT: 
	I1009 12:54:14.573516    5347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:14.573530    5347 client.go:171] duration metric: took 334.16125ms to LocalClient.Create
	I1009 12:54:16.575640    5347 start.go:128] duration metric: took 2.4000925s to createHost
	I1009 12:54:16.575709    5347 start.go:83] releasing machines lock for "false-311000", held for 2.400568791s
	W1009 12:54:16.576120    5347 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:16.585622    5347 out.go:201] 
	W1009 12:54:16.589924    5347 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:16.589948    5347 out.go:270] * 
	* 
	W1009 12:54:16.592799    5347 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:54:16.601770    5347 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
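
The recovery path is identical in each run above: after the first `StartHost failed`, the profile is deleted and creation is retried once, five seconds later, before the command exits with GUEST_PROVISION (exit status 80). A stand-in sketch of that control flow, with a hypothetical startHost in place of the driver's real create path:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost is a hypothetical stand-in for the driver's create path; it
    // fails this way for as long as the socket_vmnet daemon is unreachable.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err == nil {
            return
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        if err := startHost(); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }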

TestStartStop/group/old-k8s-version/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.837246542s)

-- stdout --
	* [old-k8s-version-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-462000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:54:18.950900    5456 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:18.951055    5456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:18.951058    5456 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:18.951061    5456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:18.951203    5456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:18.952385    5456 out.go:352] Setting JSON to false
	I1009 12:54:18.970085    5456 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5028,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:54:18.970152    5456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:54:18.976393    5456 out.go:177] * [old-k8s-version-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:54:18.983500    5456 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:54:18.983577    5456 notify.go:220] Checking for updates...
	I1009 12:54:18.990471    5456 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:54:18.993457    5456 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:54:18.996539    5456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:54:18.999501    5456 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:54:19.002492    5456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:54:19.005906    5456 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:19.005977    5456 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:19.006034    5456 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:54:19.010474    5456 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:54:19.017387    5456 start.go:297] selected driver: qemu2
	I1009 12:54:19.017395    5456 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:54:19.017403    5456 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:54:19.019978    5456 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:54:19.023471    5456 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:54:19.026574    5456 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:54:19.026588    5456 cni.go:84] Creating CNI manager for ""
	I1009 12:54:19.026607    5456 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1009 12:54:19.026640    5456 start.go:340] cluster config:
	{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:19.031323    5456 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:19.039467    5456 out.go:177] * Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	I1009 12:54:19.042450    5456 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 12:54:19.042470    5456 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 12:54:19.042480    5456 cache.go:56] Caching tarball of preloaded images
	I1009 12:54:19.042567    5456 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:54:19.042573    5456 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1009 12:54:19.042636    5456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/old-k8s-version-462000/config.json ...
	I1009 12:54:19.042646    5456 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/old-k8s-version-462000/config.json: {Name:mk9f6b24fd86f72104e8a9cd779cab56989cc282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:54:19.043013    5456 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:19.043061    5456 start.go:364] duration metric: took 41.333µs to acquireMachinesLock for "old-k8s-version-462000"
	I1009 12:54:19.043071    5456 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:19.043107    5456 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:19.050348    5456 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:54:19.067960    5456 start.go:159] libmachine.API.Create for "old-k8s-version-462000" (driver="qemu2")
	I1009 12:54:19.067995    5456 client.go:168] LocalClient.Create starting
	I1009 12:54:19.068063    5456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:19.068100    5456 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:19.068115    5456 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:19.068153    5456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:19.068183    5456 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:19.068189    5456 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:19.068593    5456 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:19.227454    5456 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:19.285727    5456 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:19.285733    5456 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:19.285931    5456 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:19.295793    5456 main.go:141] libmachine: STDOUT: 
	I1009 12:54:19.295809    5456 main.go:141] libmachine: STDERR: 
	I1009 12:54:19.295864    5456 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2 +20000M
	I1009 12:54:19.304408    5456 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:19.304423    5456 main.go:141] libmachine: STDERR: 
	I1009 12:54:19.304436    5456 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:19.304441    5456 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:19.304453    5456 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:19.304485    5456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5d:fa:8e:df:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:19.306285    5456 main.go:141] libmachine: STDOUT: 
	I1009 12:54:19.306297    5456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:19.306317    5456 client.go:171] duration metric: took 238.324333ms to LocalClient.Create
	I1009 12:54:21.308430    5456 start.go:128] duration metric: took 2.265377292s to createHost
	I1009 12:54:21.308488    5456 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 2.265493333s
	W1009 12:54:21.308596    5456 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:21.323856    5456 out.go:177] * Deleting "old-k8s-version-462000" in qemu2 ...
	W1009 12:54:21.350175    5456 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:21.350201    5456 start.go:729] Will try again in 5 seconds ...
	I1009 12:54:26.352462    5456 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:26.352977    5456 start.go:364] duration metric: took 411.792µs to acquireMachinesLock for "old-k8s-version-462000"
	I1009 12:54:26.353109    5456 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:26.353403    5456 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:26.358334    5456 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:54:26.406483    5456 start.go:159] libmachine.API.Create for "old-k8s-version-462000" (driver="qemu2")
	I1009 12:54:26.406531    5456 client.go:168] LocalClient.Create starting
	I1009 12:54:26.406669    5456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:26.406742    5456 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:26.406759    5456 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:26.406824    5456 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:26.406884    5456 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:26.406895    5456 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:26.407494    5456 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:26.581486    5456 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:26.693420    5456 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:26.693426    5456 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:26.693615    5456 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:26.703585    5456 main.go:141] libmachine: STDOUT: 
	I1009 12:54:26.703604    5456 main.go:141] libmachine: STDERR: 
	I1009 12:54:26.703662    5456 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2 +20000M
	I1009 12:54:26.712078    5456 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:26.712094    5456 main.go:141] libmachine: STDERR: 
	I1009 12:54:26.712111    5456 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:26.712117    5456 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:26.712125    5456 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:26.712153    5456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1c:f0:65:06:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:26.713957    5456 main.go:141] libmachine: STDOUT: 
	I1009 12:54:26.713970    5456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:26.713983    5456 client.go:171] duration metric: took 307.457542ms to LocalClient.Create
	I1009 12:54:28.716087    5456 start.go:128] duration metric: took 2.362733584s to createHost
	I1009 12:54:28.716136    5456 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 2.363215083s
	W1009 12:54:28.716503    5456 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:28.728148    5456 out.go:201] 
	W1009 12:54:28.730222    5456 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:28.730259    5456 out.go:270] * 
	W1009 12:54:28.732618    5456 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:54:28.741225    5456 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (71.426875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.91s)
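
[Triage note] Every failed start in this group reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon on the agent ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so the qemu2 VM is never launched. A minimal triage sketch for the macOS host follows; it assumes socket_vmnet was installed as a Homebrew-managed service, which this log does not confirm:

	# hypothetical check on the build agent (assumes Homebrew-managed
	# socket_vmnet; paths mirror those in the log above)
	ls -l /var/run/socket_vmnet                  # the unix socket should exist
	sudo launchctl list | grep -i socket_vmnet   # the daemon should be loaded
	sudo brew services restart socket_vmnet      # restart it if it is down

If the daemon comes back, the repeated "Connection refused" errors above should clear on the next start attempt.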

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-462000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-462000 create -f testdata/busybox.yaml: exit status 1 (29.223334ms)

** stderr ** 
	error: context "old-k8s-version-462000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-462000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (34.384209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (34.0215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
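
[Triage note] The error `context "old-k8s-version-462000" does not exist` is a downstream effect of the failed FirstStart: the VM never booted, so minikube never wrote a kubeconfig entry for the profile, and every kubectl call against that context fails immediately. A quick confirmation sketch (standard kubectl/minikube commands; the expected output is an assumption, not captured in this run):

	kubectl config get-contexts              # the profile's context will be missing
	out/minikube-darwin-arm64 profile list   # the profile itself should report as not running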

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-462000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-462000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-462000 describe deploy/metrics-server -n kube-system: exit status 1 (27.69125ms)

** stderr ** 
	error: context "old-k8s-version-462000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-462000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (33.445417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.200985542s)

-- stdout --
	* [old-k8s-version-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:54:31.098750    5498 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:31.098885    5498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:31.098889    5498 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:31.098891    5498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:31.099020    5498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:31.100052    5498 out.go:352] Setting JSON to false
	I1009 12:54:31.117711    5498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5041,"bootTime":1728498630,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:54:31.117788    5498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:54:31.122880    5498 out.go:177] * [old-k8s-version-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:54:31.129930    5498 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:54:31.129988    5498 notify.go:220] Checking for updates...
	I1009 12:54:31.136902    5498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:54:31.139900    5498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:54:31.142839    5498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:54:31.145913    5498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:54:31.148922    5498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:54:31.152218    5498 config.go:182] Loaded profile config "old-k8s-version-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1009 12:54:31.155825    5498 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 12:54:31.158878    5498 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:54:31.162812    5498 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:54:31.169808    5498 start.go:297] selected driver: qemu2
	I1009 12:54:31.169814    5498 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:31.169866    5498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:54:31.172430    5498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:54:31.172465    5498 cni.go:84] Creating CNI manager for ""
	I1009 12:54:31.172489    5498 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1009 12:54:31.172517    5498 start.go:340] cluster config:
	{Name:old-k8s-version-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:31.177062    5498 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:31.184862    5498 out.go:177] * Starting "old-k8s-version-462000" primary control-plane node in "old-k8s-version-462000" cluster
	I1009 12:54:31.188887    5498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 12:54:31.188903    5498 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 12:54:31.188915    5498 cache.go:56] Caching tarball of preloaded images
	I1009 12:54:31.189007    5498 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:54:31.189013    5498 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1009 12:54:31.189072    5498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/old-k8s-version-462000/config.json ...
	I1009 12:54:31.189565    5498 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:31.189600    5498 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "old-k8s-version-462000"
	I1009 12:54:31.189614    5498 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:54:31.189618    5498 fix.go:54] fixHost starting: 
	I1009 12:54:31.189741    5498 fix.go:112] recreateIfNeeded on old-k8s-version-462000: state=Stopped err=<nil>
	W1009 12:54:31.189750    5498 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:54:31.193864    5498 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	I1009 12:54:31.201831    5498 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:31.201873    5498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1c:f0:65:06:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:31.204326    5498 main.go:141] libmachine: STDOUT: 
	I1009 12:54:31.204347    5498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:31.204379    5498 fix.go:56] duration metric: took 14.758917ms for fixHost
	I1009 12:54:31.204384    5498 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 14.779208ms
	W1009 12:54:31.204391    5498 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:31.204445    5498 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:31.204450    5498 start.go:729] Will try again in 5 seconds ...
	I1009 12:54:36.206518    5498 start.go:360] acquireMachinesLock for old-k8s-version-462000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:36.206940    5498 start.go:364] duration metric: took 318.834µs to acquireMachinesLock for "old-k8s-version-462000"
	I1009 12:54:36.207063    5498 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:54:36.207081    5498 fix.go:54] fixHost starting: 
	I1009 12:54:36.207835    5498 fix.go:112] recreateIfNeeded on old-k8s-version-462000: state=Stopped err=<nil>
	W1009 12:54:36.207864    5498 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:54:36.216408    5498 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-462000" ...
	I1009 12:54:36.220397    5498 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:36.220658    5498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1c:f0:65:06:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/old-k8s-version-462000/disk.qcow2
	I1009 12:54:36.230651    5498 main.go:141] libmachine: STDOUT: 
	I1009 12:54:36.230707    5498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:36.230779    5498 fix.go:56] duration metric: took 23.695709ms for fixHost
	I1009 12:54:36.230794    5498 start.go:83] releasing machines lock for "old-k8s-version-462000", held for 23.830375ms
	W1009 12:54:36.230966    5498 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-462000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:36.238397    5498 out.go:201] 
	W1009 12:54:36.242451    5498 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:36.242474    5498 out.go:270] * 
	W1009 12:54:36.245336    5498 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:54:36.254377    5498 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-462000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (73.038833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-462000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (35.268083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-462000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-462000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-462000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.382916ms)

** stderr ** 
	error: context "old-k8s-version-462000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-462000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (34.182542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-462000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (33.605084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
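
[Triage note] The (-want +got) diff above has only "-want" entries because the harness compares the expected v1.20.0 image set against the output of `image list`, and with the VM stopped there is no runtime to enumerate images from, so every expected image is reported missing. Re-running the listing by hand illustrates this; on a healthy cluster it would print tags such as k8s.gcr.io/kube-apiserver:v1.20.0 (a sketch, not output captured from this run):

	out/minikube-darwin-arm64 -p old-k8s-version-462000 image list --format=json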

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1: exit status 83 (42.013542ms)

-- stdout --
	* The control-plane node old-k8s-version-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-462000"

-- /stdout --
** stderr ** 
	I1009 12:54:36.549254    5517 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:36.549678    5517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:36.549682    5517 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:36.549684    5517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:36.549852    5517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:36.550072    5517 out.go:352] Setting JSON to false
	I1009 12:54:36.550078    5517 mustload.go:65] Loading cluster: old-k8s-version-462000
	I1009 12:54:36.550294    5517 config.go:182] Loaded profile config "old-k8s-version-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1009 12:54:36.553272    5517 out.go:177] * The control-plane node old-k8s-version-462000 host is not running: state=Stopped
	I1009 12:54:36.556111    5517 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-462000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (33.308459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (34.173792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-462000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
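The pause failure is the same cascade: with the host Stopped, minikube only prints its start hint and exits non-zero (83 here) without pausing anything. The intended sequence, sketched from the commands already shown in this log and assuming the VM could actually boot, would be:

    out/minikube-darwin-arm64 start -p old-k8s-version-462000
    out/minikube-darwin-arm64 pause -p old-k8s-version-462000 --alsologtostderr -v=1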

TestStartStop/group/no-preload/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.983527083s)

-- stdout --
	* [no-preload-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-089000" primary control-plane node in "no-preload-089000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-089000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:54:36.881629    5534 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:36.881811    5534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:36.881815    5534 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:36.881817    5534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:36.881953    5534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:36.883149    5534 out.go:352] Setting JSON to false
	I1009 12:54:36.900607    5534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5046,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:54:36.900680    5534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:54:36.905214    5534 out.go:177] * [no-preload-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:54:36.912048    5534 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:54:36.912098    5534 notify.go:220] Checking for updates...
	I1009 12:54:36.918097    5534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:54:36.921052    5534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:54:36.924112    5534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:54:36.927134    5534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:54:36.930082    5534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:54:36.933508    5534 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:36.933571    5534 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:36.933625    5534 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:54:36.938108    5534 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:54:36.945004    5534 start.go:297] selected driver: qemu2
	I1009 12:54:36.945011    5534 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:54:36.945016    5534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:54:36.947676    5534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:54:36.951086    5534 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:54:36.954082    5534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:54:36.954109    5534 cni.go:84] Creating CNI manager for ""
	I1009 12:54:36.954132    5534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:54:36.954137    5534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:54:36.954172    5534 start.go:340] cluster config:
	{Name:no-preload-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:36.958828    5534 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.967100    5534 out.go:177] * Starting "no-preload-089000" primary control-plane node in "no-preload-089000" cluster
	I1009 12:54:36.971065    5534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:54:36.971156    5534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/no-preload-089000/config.json ...
	I1009 12:54:36.971174    5534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/no-preload-089000/config.json: {Name:mk6b4a1189bbf79c0906ad8a94bfa405968037f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:54:36.971368    5534 cache.go:107] acquiring lock: {Name:mk399e65b8f2e7cc95ba894edd7eaeb9333dea44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971371    5534 cache.go:107] acquiring lock: {Name:mkdfdc418ae9138ad358d30bd2f5e149a56d3723 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971379    5534 cache.go:107] acquiring lock: {Name:mk1c538c5293d585292d318efaaf037236a579ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971416    5534 cache.go:107] acquiring lock: {Name:mk25e2e0eee4eb3d0e5a38063d8e8e0bca63e62c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971405    5534 cache.go:107] acquiring lock: {Name:mkfb7163ed16823aa3a0f70f48931b1a457308e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971414    5534 cache.go:107] acquiring lock: {Name:mke0ab083ffd9c196d82c77fb53bee0129912384 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971498    5534 cache.go:107] acquiring lock: {Name:mk0de16ad342f55bf4e6a7fc4b599fad204791a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971549    5534 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 12:54:36.971583    5534 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 12:54:36.971590    5534 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 12:54:36.971602    5534 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 427.791µs
	I1009 12:54:36.971445    5534 cache.go:107] acquiring lock: {Name:mk8a3f319e12e62cf335cb599a13d8bd3b6292a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:36.971609    5534 start.go:360] acquireMachinesLock for no-preload-089000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:36.971723    5534 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 12:54:36.971780    5534 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 12:54:36.971906    5534 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 12:54:36.971912    5534 start.go:364] duration metric: took 240.167µs to acquireMachinesLock for "no-preload-089000"
	I1009 12:54:36.971923    5534 start.go:93] Provisioning new machine with config: &{Name:no-preload-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:36.971981    5534 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:36.972034    5534 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 12:54:36.972010    5534 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 12:54:36.972013    5534 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 12:54:36.978064    5534 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:54:36.981806    5534 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 12:54:36.981893    5534 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 12:54:36.981984    5534 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 12:54:36.981998    5534 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 12:54:36.982014    5534 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 12:54:36.982363    5534 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 12:54:36.982402    5534 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 12:54:36.995489    5534 start.go:159] libmachine.API.Create for "no-preload-089000" (driver="qemu2")
	I1009 12:54:36.995510    5534 client.go:168] LocalClient.Create starting
	I1009 12:54:36.995589    5534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:36.995628    5534 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:36.995639    5534 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:36.995682    5534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:36.995714    5534 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:36.995720    5534 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:36.996103    5534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:37.158047    5534 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:37.313423    5534 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:37.313442    5534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:37.313693    5534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:37.323896    5534 main.go:141] libmachine: STDOUT: 
	I1009 12:54:37.323913    5534 main.go:141] libmachine: STDERR: 
	I1009 12:54:37.323971    5534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2 +20000M
	I1009 12:54:37.333694    5534 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:37.333715    5534 main.go:141] libmachine: STDERR: 
	I1009 12:54:37.333730    5534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:37.333736    5534 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:37.333757    5534 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:37.333783    5534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c7:b6:ee:0c:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:37.335962    5534 main.go:141] libmachine: STDOUT: 
	I1009 12:54:37.335987    5534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:37.336005    5534 client.go:171] duration metric: took 340.501125ms to LocalClient.Create
	I1009 12:54:37.448257    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 12:54:37.454445    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 12:54:37.459173    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1009 12:54:37.582354    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 12:54:37.599632    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1009 12:54:37.599654    5534 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 628.463209ms
	I1009 12:54:37.599664    5534 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1009 12:54:37.609243    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 12:54:37.666462    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1009 12:54:37.716643    5534 cache.go:162] opening:  /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 12:54:39.336199    5534 start.go:128] duration metric: took 2.364257958s to createHost
	I1009 12:54:39.336294    5534 start.go:83] releasing machines lock for "no-preload-089000", held for 2.364446875s
	W1009 12:54:39.336360    5534 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:39.347379    5534 out.go:177] * Deleting "no-preload-089000" in qemu2 ...
	W1009 12:54:39.380217    5534 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:39.380244    5534 start.go:729] Will try again in 5 seconds ...
	I1009 12:54:41.157538    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1009 12:54:41.157602    5534 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.186490833s
	I1009 12:54:41.157635    5534 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1009 12:54:41.236782    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1009 12:54:41.236856    5534 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.2655s
	I1009 12:54:41.236884    5534 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1009 12:54:41.549809    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1009 12:54:41.549867    5534 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.578839042s
	I1009 12:54:41.549900    5534 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1009 12:54:41.833979    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1009 12:54:41.834047    5534 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.863026458s
	I1009 12:54:41.834080    5534 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1009 12:54:41.875735    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1009 12:54:41.875773    5534 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.904673375s
	I1009 12:54:41.875823    5534 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1009 12:54:44.380571    5534 start.go:360] acquireMachinesLock for no-preload-089000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:44.381076    5534 start.go:364] duration metric: took 406.458µs to acquireMachinesLock for "no-preload-089000"
	I1009 12:54:44.381190    5534 start.go:93] Provisioning new machine with config: &{Name:no-preload-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:44.381434    5534 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:44.395224    5534 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:54:44.444991    5534 start.go:159] libmachine.API.Create for "no-preload-089000" (driver="qemu2")
	I1009 12:54:44.445040    5534 client.go:168] LocalClient.Create starting
	I1009 12:54:44.445173    5534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:44.445264    5534 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:44.445284    5534 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:44.445368    5534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:44.445430    5534 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:44.445447    5534 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:44.445999    5534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:44.615418    5534 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:44.767828    5534 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:44.767835    5534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:44.768056    5534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:44.778348    5534 main.go:141] libmachine: STDOUT: 
	I1009 12:54:44.778365    5534 main.go:141] libmachine: STDERR: 
	I1009 12:54:44.778429    5534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2 +20000M
	I1009 12:54:44.787065    5534 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:44.787084    5534 main.go:141] libmachine: STDERR: 
	I1009 12:54:44.787103    5534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:44.787109    5534 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:44.787121    5534 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:44.787158    5534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8c:fe:21:2a:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:44.789116    5534 main.go:141] libmachine: STDOUT: 
	I1009 12:54:44.789145    5534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:44.789158    5534 client.go:171] duration metric: took 344.124625ms to LocalClient.Create
	I1009 12:54:46.228013    5534 cache.go:157] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1009 12:54:46.228087    5534 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.257130875s
	I1009 12:54:46.228116    5534 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1009 12:54:46.228162    5534 cache.go:87] Successfully saved all images to host disk.
	I1009 12:54:46.791302    5534 start.go:128] duration metric: took 2.409922083s to createHost
	I1009 12:54:46.791355    5534 start.go:83] releasing machines lock for "no-preload-089000", held for 2.410337s
	W1009 12:54:46.791617    5534 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:46.804142    5534 out.go:201] 
	W1009 12:54:46.809231    5534 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:46.809256    5534 out.go:270] * 
	* 
	W1009 12:54:46.811889    5534 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:54:46.820125    5534 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (75.574875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.06s)
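Every VM creation in this group fails the same way: the socket_vmnet_client wrapper cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never gets its network file descriptor. A plausible triage on the build host, assuming socket_vmnet was installed via Homebrew as the SocketVMnetClientPath above suggests:

    # confirm the daemon socket actually exists
    ls -l /var/run/socket_vmnet
    # (re)start the Homebrew-managed service; root is typically needed for /var/run
    sudo brew services restart socket_vmnet
    # then retry the failed start
    out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.1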

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-089000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-089000 create -f testdata/busybox.yaml: exit status 1 (29.295584ms)

** stderr ** 
	error: context "no-preload-089000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-089000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (34.061542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (34.206041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
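Since the first start never produced a cluster, it also never wrote a "no-preload-089000" context into the kubeconfig, which is exactly what kubectl reports here. Confirming that, with the kubeconfig path from this run:

    KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig kubectl config get-contexts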

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-089000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-089000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-089000 describe deploy/metrics-server -n kube-system: exit status 1 (27.768834ms)

** stderr ** 
	error: context "no-preload-089000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-089000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (34.023583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
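Note the asymmetry in this failure: "addons enable metrics-server" itself exits 0 because it only updates the profile's stored config, while the follow-up kubectl describe needs a live apiserver and fails. One way to confirm the addon flag was recorded despite the dead VM, assuming the same binary and profile as above:

    out/minikube-darwin-arm64 addons list -p no-preload-089000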

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.185623416s)

-- stdout --
	* [no-preload-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-089000" primary control-plane node in "no-preload-089000" cluster
	* Restarting existing qemu2 VM for "no-preload-089000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-089000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:54:50.787555    5610 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:50.787707    5610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:50.787710    5610 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:50.787712    5610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:50.787845    5610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:50.788907    5610 out.go:352] Setting JSON to false
	I1009 12:54:50.806645    5610 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5060,"bootTime":1728498630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:54:50.806741    5610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:54:50.811434    5610 out.go:177] * [no-preload-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:54:50.818477    5610 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:54:50.818522    5610 notify.go:220] Checking for updates...
	I1009 12:54:50.824439    5610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:54:50.827445    5610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:54:50.830400    5610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:54:50.833395    5610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:54:50.836414    5610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:54:50.839678    5610 config.go:182] Loaded profile config "no-preload-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:50.839930    5610 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:54:50.844373    5610 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:54:50.851429    5610 start.go:297] selected driver: qemu2
	I1009 12:54:50.851437    5610 start.go:901] validating driver "qemu2" against &{Name:no-preload-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:50.851520    5610 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:54:50.854139    5610 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:54:50.854167    5610 cni.go:84] Creating CNI manager for ""
	I1009 12:54:50.854191    5610 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:54:50.854210    5610 start.go:340] cluster config:
	{Name:no-preload-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:50.858629    5610 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.865320    5610 out.go:177] * Starting "no-preload-089000" primary control-plane node in "no-preload-089000" cluster
	I1009 12:54:50.869367    5610 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:54:50.869445    5610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/no-preload-089000/config.json ...
	I1009 12:54:50.869463    5610 cache.go:107] acquiring lock: {Name:mk8a3f319e12e62cf335cb599a13d8bd3b6292a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869463    5610 cache.go:107] acquiring lock: {Name:mk25e2e0eee4eb3d0e5a38063d8e8e0bca63e62c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869472    5610 cache.go:107] acquiring lock: {Name:mke0ab083ffd9c196d82c77fb53bee0129912384 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869494    5610 cache.go:107] acquiring lock: {Name:mkfb7163ed16823aa3a0f70f48931b1a457308e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869504    5610 cache.go:107] acquiring lock: {Name:mk0de16ad342f55bf4e6a7fc4b599fad204791a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869567    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 12:54:50.869576    5610 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120µs
	I1009 12:54:50.869582    5610 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 12:54:50.869585    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1009 12:54:50.869594    5610 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 90.5µs
	I1009 12:54:50.869598    5610 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1009 12:54:50.869595    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1009 12:54:50.869603    5610 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 110.083µs
	I1009 12:54:50.869606    5610 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1009 12:54:50.869610    5610 cache.go:107] acquiring lock: {Name:mkdfdc418ae9138ad358d30bd2f5e149a56d3723 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869626    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1009 12:54:50.869611    5610 cache.go:107] acquiring lock: {Name:mk1c538c5293d585292d318efaaf037236a579ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869662    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1009 12:54:50.869667    5610 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 66.792µs
	I1009 12:54:50.869671    5610 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1009 12:54:50.869630    5610 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 173.875µs
	I1009 12:54:50.869703    5610 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1009 12:54:50.869679    5610 cache.go:107] acquiring lock: {Name:mk399e65b8f2e7cc95ba894edd7eaeb9333dea44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:50.869763    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1009 12:54:50.869769    5610 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 158.166µs
	I1009 12:54:50.869772    5610 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1009 12:54:50.869787    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1009 12:54:50.869791    5610 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 131.375µs
	I1009 12:54:50.869795    5610 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1009 12:54:50.869843    5610 cache.go:115] /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1009 12:54:50.869847    5610 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 385.667µs
	I1009 12:54:50.869850    5610 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1009 12:54:50.869854    5610 cache.go:87] Successfully saved all images to host disk.
	I1009 12:54:50.869892    5610 start.go:360] acquireMachinesLock for no-preload-089000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:50.869925    5610 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "no-preload-089000"
	I1009 12:54:50.869934    5610 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:54:50.869938    5610 fix.go:54] fixHost starting: 
	I1009 12:54:50.870063    5610 fix.go:112] recreateIfNeeded on no-preload-089000: state=Stopped err=<nil>
	W1009 12:54:50.870070    5610 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:54:50.878482    5610 out.go:177] * Restarting existing qemu2 VM for "no-preload-089000" ...
	I1009 12:54:50.882417    5610 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:50.882458    5610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8c:fe:21:2a:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:50.884647    5610 main.go:141] libmachine: STDOUT: 
	I1009 12:54:50.884669    5610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:50.884694    5610 fix.go:56] duration metric: took 14.755584ms for fixHost
	I1009 12:54:50.884698    5610 start.go:83] releasing machines lock for "no-preload-089000", held for 14.768334ms
	W1009 12:54:50.884707    5610 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:50.884743    5610 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:50.884749    5610 start.go:729] Will try again in 5 seconds ...
	I1009 12:54:55.886828    5610 start.go:360] acquireMachinesLock for no-preload-089000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:55.887301    5610 start.go:364] duration metric: took 352.416µs to acquireMachinesLock for "no-preload-089000"
	I1009 12:54:55.887445    5610 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:54:55.887467    5610 fix.go:54] fixHost starting: 
	I1009 12:54:55.888222    5610 fix.go:112] recreateIfNeeded on no-preload-089000: state=Stopped err=<nil>
	W1009 12:54:55.888248    5610 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:54:55.892001    5610 out.go:177] * Restarting existing qemu2 VM for "no-preload-089000" ...
	I1009 12:54:55.897835    5610 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:55.898068    5610 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:8c:fe:21:2a:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/no-preload-089000/disk.qcow2
	I1009 12:54:55.908482    5610 main.go:141] libmachine: STDOUT: 
	I1009 12:54:55.908551    5610 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:55.908634    5610 fix.go:56] duration metric: took 21.171083ms for fixHost
	I1009 12:54:55.908650    5610 start.go:83] releasing machines lock for "no-preload-089000", held for 21.326417ms
	W1009 12:54:55.908798    5610 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-089000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-089000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:55.915729    5610 out.go:201] 
	W1009 12:54:55.918945    5610 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:54:55.918967    5610 out.go:270] * 
	* 
	W1009 12:54:55.921663    5610 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:54:55.932690    5610 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-089000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (74.875833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
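Every failure in this group reduces to the same driver error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never gets its network file descriptor. A minimal Go sketch for probing that socket directly, independent of minikube (the path is taken from the failing command above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failing qemu invocation in the log above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the driver error: the path
			// exists but no socket_vmnet daemon is accepting connections.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails with "connection refused" here as well, the socket_vmnet daemon on the build agent is down, and every qemu2 start in this report fails the same way.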

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-089000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (35.330291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
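The repeated `context "no-preload-089000" does not exist` errors are purely client-side: the failed restart never wrote a context into the kubeconfig, so every subsequent kubectl or client-go call fails before touching a cluster. A sketch of that lookup with client-go's clientcmd, with the kubeconfig path and context name taken from this run:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path and context name are taken from the run above.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19780-1164/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["no-preload-089000"]; !ok {
			// The state these tests hit: the aborted start never wrote a
			// context, so kubectl and client-go fail before reaching a cluster.
			fmt.Println(`context "no-preload-089000" does not exist`)
		}
	}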

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-089000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.325708ms)

** stderr ** 
	error: context "no-preload-089000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (34.032667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-089000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (33.4745ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
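The `-want +got` diff above is a plain set difference between the expected v1.31.1 image list and the output of `minikube image list`, which is empty because the VM never booted. A sketch of that comparison; `missing` is an illustrative helper, not the test's actual code:

	package main

	import "fmt"

	// missing returns the entries of want that are absent from got; this is
	// the shape of the "-want +got" output above when got is empty.
	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/kube-controller-manager:v1.31.1",
			"registry.k8s.io/kube-proxy:v1.31.1",
			"registry.k8s.io/kube-scheduler:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // `image list` returned nothing: the VM never started
		for _, m := range missing(want, got) {
			fmt.Println("-", m)
		}
	}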

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-089000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-089000 --alsologtostderr -v=1: exit status 83 (44.992917ms)

-- stdout --
	* The control-plane node no-preload-089000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-089000"

-- /stdout --
** stderr ** 
	I1009 12:54:56.225075    5632 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:56.225249    5632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:56.225253    5632 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:56.225255    5632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:56.225371    5632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:56.225588    5632 out.go:352] Setting JSON to false
	I1009 12:54:56.225595    5632 mustload.go:65] Loading cluster: no-preload-089000
	I1009 12:54:56.225816    5632 config.go:182] Loaded profile config "no-preload-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:56.230332    5632 out.go:177] * The control-plane node no-preload-089000 host is not running: state=Stopped
	I1009 12:54:56.233400    5632 out.go:177]   To start a cluster, run: "minikube start -p no-preload-089000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-089000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (33.107042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (33.704208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
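Each post-mortem block runs `minikube status --format={{.Host}}` and treats exit status 7 as "may be ok", since that is how minikube reports a stopped host rather than an internal error. A sketch of reading that exit code with os/exec; the binary name is shortened from out/minikube-darwin-arm64:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "no-preload-089000")
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// This run: out is "Stopped" and the code is 7 - reported, not fatal.
			fmt.Printf("status %q, exit code %d (may be ok)\n", out, exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("status %q\n", out)
	}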

TestStartStop/group/embed-certs/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-266000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-266000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.826734708s)

-- stdout --
	* [embed-certs-266000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-266000" primary control-plane node in "embed-certs-266000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-266000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:54:56.569255    5649 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:54:56.569419    5649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:56.569422    5649 out.go:358] Setting ErrFile to fd 2...
	I1009 12:54:56.569425    5649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:54:56.569562    5649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:54:56.570744    5649 out.go:352] Setting JSON to false
	I1009 12:54:56.588332    5649 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5066,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:54:56.588412    5649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:54:56.597328    5649 out.go:177] * [embed-certs-266000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:54:56.601210    5649 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:54:56.601278    5649 notify.go:220] Checking for updates...
	I1009 12:54:56.608358    5649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:54:56.611314    5649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:54:56.614358    5649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:54:56.617349    5649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:54:56.618771    5649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:54:56.622694    5649 config.go:182] Loaded profile config "cert-expiration-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:56.622759    5649 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:54:56.622802    5649 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:54:56.630310    5649 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:54:56.637322    5649 start.go:297] selected driver: qemu2
	I1009 12:54:56.637333    5649 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:54:56.637341    5649 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:54:56.639876    5649 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:54:56.643418    5649 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:54:56.646323    5649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:54:56.646341    5649 cni.go:84] Creating CNI manager for ""
	I1009 12:54:56.646362    5649 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:54:56.646366    5649 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:54:56.646418    5649 start.go:340] cluster config:
	{Name:embed-certs-266000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:54:56.651050    5649 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:54:56.659329    5649 out.go:177] * Starting "embed-certs-266000" primary control-plane node in "embed-certs-266000" cluster
	I1009 12:54:56.663316    5649 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:54:56.663342    5649 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:54:56.663351    5649 cache.go:56] Caching tarball of preloaded images
	I1009 12:54:56.663452    5649 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:54:56.663461    5649 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:54:56.663527    5649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/embed-certs-266000/config.json ...
	I1009 12:54:56.663538    5649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/embed-certs-266000/config.json: {Name:mk4f2f5f121215f0804f9d4c42e393fa73d267f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:54:56.663882    5649 start.go:360] acquireMachinesLock for embed-certs-266000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:54:56.663933    5649 start.go:364] duration metric: took 44.75µs to acquireMachinesLock for "embed-certs-266000"
	I1009 12:54:56.663945    5649 start.go:93] Provisioning new machine with config: &{Name:embed-certs-266000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:54:56.663983    5649 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:54:56.670294    5649 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:54:56.687355    5649 start.go:159] libmachine.API.Create for "embed-certs-266000" (driver="qemu2")
	I1009 12:54:56.687382    5649 client.go:168] LocalClient.Create starting
	I1009 12:54:56.687454    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:54:56.687496    5649 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:56.687512    5649 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:56.687560    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:54:56.687590    5649 main.go:141] libmachine: Decoding PEM data...
	I1009 12:54:56.687601    5649 main.go:141] libmachine: Parsing certificate...
	I1009 12:54:56.687988    5649 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:54:56.847043    5649 main.go:141] libmachine: Creating SSH key...
	I1009 12:54:56.895405    5649 main.go:141] libmachine: Creating Disk image...
	I1009 12:54:56.895416    5649 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:54:56.895624    5649 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:54:56.905302    5649 main.go:141] libmachine: STDOUT: 
	I1009 12:54:56.905319    5649 main.go:141] libmachine: STDERR: 
	I1009 12:54:56.905379    5649 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2 +20000M
	I1009 12:54:56.913863    5649 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:54:56.913885    5649 main.go:141] libmachine: STDERR: 
	I1009 12:54:56.913896    5649 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:54:56.913901    5649 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:54:56.913911    5649 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:54:56.913942    5649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:07:ef:3d:53:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:54:56.915772    5649 main.go:141] libmachine: STDOUT: 
	I1009 12:54:56.915784    5649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:54:56.915806    5649 client.go:171] duration metric: took 228.425416ms to LocalClient.Create
	I1009 12:54:58.917917    5649 start.go:128] duration metric: took 2.253987125s to createHost
	I1009 12:54:58.917983    5649 start.go:83] releasing machines lock for "embed-certs-266000", held for 2.25411525s
	W1009 12:54:58.918045    5649 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:58.928276    5649 out.go:177] * Deleting "embed-certs-266000" in qemu2 ...
	W1009 12:54:58.955972    5649 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:54:58.956016    5649 start.go:729] Will try again in 5 seconds ...
	I1009 12:55:03.958078    5649 start.go:360] acquireMachinesLock for embed-certs-266000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:03.958673    5649 start.go:364] duration metric: took 492.833µs to acquireMachinesLock for "embed-certs-266000"
	I1009 12:55:03.958801    5649 start.go:93] Provisioning new machine with config: &{Name:embed-certs-266000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:55:03.959052    5649 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:55:03.973542    5649 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:55:04.021982    5649 start.go:159] libmachine.API.Create for "embed-certs-266000" (driver="qemu2")
	I1009 12:55:04.022027    5649 client.go:168] LocalClient.Create starting
	I1009 12:55:04.022171    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:55:04.022257    5649 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:04.022300    5649 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:04.022364    5649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:55:04.022429    5649 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:04.022448    5649 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:04.023059    5649 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:55:04.192244    5649 main.go:141] libmachine: Creating SSH key...
	I1009 12:55:04.299798    5649 main.go:141] libmachine: Creating Disk image...
	I1009 12:55:04.299809    5649 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:55:04.300002    5649 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:55:04.309861    5649 main.go:141] libmachine: STDOUT: 
	I1009 12:55:04.309883    5649 main.go:141] libmachine: STDERR: 
	I1009 12:55:04.309944    5649 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2 +20000M
	I1009 12:55:04.318535    5649 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:55:04.318551    5649 main.go:141] libmachine: STDERR: 
	I1009 12:55:04.318563    5649 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:55:04.318568    5649 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:55:04.318577    5649 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:04.318617    5649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:49:8b:c3:de:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:55:04.320460    5649 main.go:141] libmachine: STDOUT: 
	I1009 12:55:04.320476    5649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:04.320489    5649 client.go:171] duration metric: took 298.467208ms to LocalClient.Create
	I1009 12:55:06.322585    5649 start.go:128] duration metric: took 2.363572208s to createHost
	I1009 12:55:06.322658    5649 start.go:83] releasing machines lock for "embed-certs-266000", held for 2.36403825s
	W1009 12:55:06.323035    5649 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-266000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-266000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:06.336746    5649 out.go:201] 
	W1009 12:55:06.339772    5649 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:06.339798    5649 out.go:270] * 
	* 
	W1009 12:55:06.342872    5649 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:55:06.351730    5649 out.go:201] 

** /stderr **
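Note that disk provisioning itself succeeds in the log above: the two qemu-img steps (a raw-to-qcow2 convert, then a +20000M resize) both complete before the network hand-off fails. A sketch of those two steps via os/exec, with paths shortened from the .minikube/machines directory:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one command and echoes its combined output, mirroring the
	// STDOUT/STDERR pairs that libmachine logs above.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("%s %v: %s\n", name, args, out)
		return err
	}

	func main() {
		// Paths shortened; the real ones live under .minikube/machines/<profile>/.
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
			"disk.qcow2.raw", "disk.qcow2"); err != nil {
			panic(err)
		}
		if err := run("qemu-img", "resize", "disk.qcow2", "+20000M"); err != nil {
			panic(err)
		}
	}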
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-266000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (73.268541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.90s)
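The failing start is launched through socket_vmnet_client, and the `-netdev socket,id=net0,fd=3` flag only works if the client first connects to the daemon and passes the connected socket to qemu as file descriptor 3. A sketch of that hand-off pattern in Go, illustrative rather than socket_vmnet's actual source; in this run the initial dial is exactly what fails:

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // this run: "connection refused", so qemu is never launched
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr),
		// which is the descriptor "-netdev socket,id=net0,fd=3" refers to.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}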

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-266000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-266000 create -f testdata/busybox.yaml: exit status 1 (29.481583ms)

** stderr ** 
	error: context "embed-certs-266000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-266000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (33.541084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (33.723459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
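Both the fresh create above and the no-preload restart earlier follow the same single-retry flow: StartHost fails, minikube deletes the half-created VM, waits five seconds ("Will try again in 5 seconds ..."), tries once more, then exits with GUEST_PROVISION. A sketch of that control flow; startWithRetry and startHost are stand-ins, not minikube's real API:

	package main

	import (
		"fmt"
		"time"
	)

	// startWithRetry mirrors the log above: one failed attempt, a 5 second
	// pause, then a single retry before giving up.
	func startWithRetry(startHost func() error) error {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			return startHost()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		if err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}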

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-266000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-266000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-266000 describe deploy/metrics-server -n kube-system: exit status 1 (27.987041ms)

** stderr ** 
	error: context "embed-certs-266000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-266000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (33.9345ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-266000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-266000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.930468959s)

-- stdout --
	* [embed-certs-266000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-266000" primary control-plane node in "embed-certs-266000" cluster
	* Restarting existing qemu2 VM for "embed-certs-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:55:08.670120    5698 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:08.670285    5698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:08.670289    5698 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:08.670291    5698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:08.670430    5698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:08.671485    5698 out.go:352] Setting JSON to false
	I1009 12:55:08.688973    5698 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5078,"bootTime":1728498630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:55:08.689048    5698 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:55:08.692821    5698 out.go:177] * [embed-certs-266000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:55:08.699714    5698 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:55:08.699776    5698 notify.go:220] Checking for updates...
	I1009 12:55:08.706714    5698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:55:08.709680    5698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:55:08.712714    5698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:55:08.715571    5698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:55:08.718697    5698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:55:08.722055    5698 config.go:182] Loaded profile config "embed-certs-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:08.722323    5698 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:55:08.725585    5698 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:55:08.732613    5698 start.go:297] selected driver: qemu2
	I1009 12:55:08.732620    5698 start.go:901] validating driver "qemu2" against &{Name:embed-certs-266000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:08.732667    5698 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:55:08.735122    5698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:55:08.735146    5698 cni.go:84] Creating CNI manager for ""
	I1009 12:55:08.735175    5698 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:55:08.735202    5698 start.go:340] cluster config:
	{Name:embed-certs-266000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-266000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:08.739506    5698 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:55:08.747655    5698 out.go:177] * Starting "embed-certs-266000" primary control-plane node in "embed-certs-266000" cluster
	I1009 12:55:08.751634    5698 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:55:08.751648    5698 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:55:08.751659    5698 cache.go:56] Caching tarball of preloaded images
	I1009 12:55:08.751714    5698 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:55:08.751720    5698 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:55:08.751776    5698 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/embed-certs-266000/config.json ...
	I1009 12:55:08.752257    5698 start.go:360] acquireMachinesLock for embed-certs-266000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:08.752288    5698 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "embed-certs-266000"
	I1009 12:55:08.752296    5698 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:55:08.752301    5698 fix.go:54] fixHost starting: 
	I1009 12:55:08.752419    5698 fix.go:112] recreateIfNeeded on embed-certs-266000: state=Stopped err=<nil>
	W1009 12:55:08.752426    5698 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:55:08.760675    5698 out.go:177] * Restarting existing qemu2 VM for "embed-certs-266000" ...
	I1009 12:55:08.764597    5698 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:08.764649    5698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:49:8b:c3:de:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:55:08.766869    5698 main.go:141] libmachine: STDOUT: 
	I1009 12:55:08.766890    5698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:08.766916    5698 fix.go:56] duration metric: took 14.613833ms for fixHost
	I1009 12:55:08.766922    5698 start.go:83] releasing machines lock for "embed-certs-266000", held for 14.630208ms
	W1009 12:55:08.766928    5698 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:08.766972    5698 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:08.766977    5698 start.go:729] Will try again in 5 seconds ...
	I1009 12:55:13.769056    5698 start.go:360] acquireMachinesLock for embed-certs-266000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:15.499961    5698 start.go:364] duration metric: took 1.730826708s to acquireMachinesLock for "embed-certs-266000"
	I1009 12:55:15.500101    5698 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:55:15.500123    5698 fix.go:54] fixHost starting: 
	I1009 12:55:15.500935    5698 fix.go:112] recreateIfNeeded on embed-certs-266000: state=Stopped err=<nil>
	W1009 12:55:15.500964    5698 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:55:15.509659    5698 out.go:177] * Restarting existing qemu2 VM for "embed-certs-266000" ...
	I1009 12:55:15.520742    5698 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:15.521056    5698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:49:8b:c3:de:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/embed-certs-266000/disk.qcow2
	I1009 12:55:15.532821    5698 main.go:141] libmachine: STDOUT: 
	I1009 12:55:15.532895    5698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:15.533016    5698 fix.go:56] duration metric: took 32.893708ms for fixHost
	I1009 12:55:15.533044    5698 start.go:83] releasing machines lock for "embed-certs-266000", held for 33.023584ms
	W1009 12:55:15.533241    5698 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:15.541626    5698 out.go:201] 
	W1009 12:55:15.544756    5698 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:15.544779    5698 out.go:270] * 
	* 
	W1009 12:55:15.546751    5698 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:55:15.555613    5698 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-266000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (67.914792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.00s)
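
Note: every failure in this group reduces to the same host-side fault: the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU its network file descriptor and the VM never launches. As a minimal sketch (socket path taken from the logs above; nc here is macOS's BSD netcat, and the daemon normally runs as root), one might verify the daemon on the agent before re-running:

	# is the daemon process alive, and does the socket file exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# try a bare connection; "Connection refused" here reproduces the failure
	nc -U /var/run/socket_vmnet < /dev/null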

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.821082417s)

-- stdout --
	* [default-k8s-diff-port-367000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-367000" primary control-plane node in "default-k8s-diff-port-367000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-367000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:55:13.123817    5720 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:13.123972    5720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:13.123975    5720 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:13.123978    5720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:13.124089    5720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:13.125243    5720 out.go:352] Setting JSON to false
	I1009 12:55:13.142808    5720 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5083,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:55:13.142873    5720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:55:13.146414    5720 out.go:177] * [default-k8s-diff-port-367000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:55:13.153429    5720 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:55:13.153501    5720 notify.go:220] Checking for updates...
	I1009 12:55:13.160430    5720 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:55:13.163368    5720 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:55:13.166277    5720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:55:13.169393    5720 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:55:13.172374    5720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:55:13.175624    5720 config.go:182] Loaded profile config "embed-certs-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:13.175683    5720 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:13.175747    5720 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:55:13.180326    5720 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:55:13.186353    5720 start.go:297] selected driver: qemu2
	I1009 12:55:13.186361    5720 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:55:13.186368    5720 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:55:13.188822    5720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 12:55:13.192312    5720 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:55:13.195462    5720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:55:13.195485    5720 cni.go:84] Creating CNI manager for ""
	I1009 12:55:13.195515    5720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:55:13.195522    5720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:55:13.195566    5720 start.go:340] cluster config:
	{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:13.200211    5720 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:55:13.208341    5720 out.go:177] * Starting "default-k8s-diff-port-367000" primary control-plane node in "default-k8s-diff-port-367000" cluster
	I1009 12:55:13.212337    5720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:55:13.212354    5720 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:55:13.212366    5720 cache.go:56] Caching tarball of preloaded images
	I1009 12:55:13.212459    5720 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:55:13.212465    5720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:55:13.212519    5720 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/default-k8s-diff-port-367000/config.json ...
	I1009 12:55:13.212536    5720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/default-k8s-diff-port-367000/config.json: {Name:mk1c78d30828d4899f3918fb0eb6b2433fc3d782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:55:13.212904    5720 start.go:360] acquireMachinesLock for default-k8s-diff-port-367000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:13.212953    5720 start.go:364] duration metric: took 41.833µs to acquireMachinesLock for "default-k8s-diff-port-367000"
	I1009 12:55:13.212964    5720 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:55:13.212996    5720 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:55:13.217369    5720 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:55:13.234465    5720 start.go:159] libmachine.API.Create for "default-k8s-diff-port-367000" (driver="qemu2")
	I1009 12:55:13.234492    5720 client.go:168] LocalClient.Create starting
	I1009 12:55:13.234557    5720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:55:13.234595    5720 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:13.234611    5720 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:13.234651    5720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:55:13.234680    5720 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:13.234686    5720 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:13.235096    5720 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:55:13.390828    5720 main.go:141] libmachine: Creating SSH key...
	I1009 12:55:13.476762    5720 main.go:141] libmachine: Creating Disk image...
	I1009 12:55:13.476768    5720 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:55:13.476980    5720 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:13.487004    5720 main.go:141] libmachine: STDOUT: 
	I1009 12:55:13.487025    5720 main.go:141] libmachine: STDERR: 
	I1009 12:55:13.487079    5720 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2 +20000M
	I1009 12:55:13.495688    5720 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:55:13.495703    5720 main.go:141] libmachine: STDERR: 
	I1009 12:55:13.495717    5720 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:13.495722    5720 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:55:13.495735    5720 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:13.495766    5720 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:98:09:95:cd:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:13.497605    5720 main.go:141] libmachine: STDOUT: 
	I1009 12:55:13.497619    5720 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:13.497638    5720 client.go:171] duration metric: took 263.149041ms to LocalClient.Create
	I1009 12:55:15.499743    5720 start.go:128] duration metric: took 2.286801708s to createHost
	I1009 12:55:15.499805    5720 start.go:83] releasing machines lock for "default-k8s-diff-port-367000", held for 2.286909209s
	W1009 12:55:15.499897    5720 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:15.516604    5720 out.go:177] * Deleting "default-k8s-diff-port-367000" in qemu2 ...
	W1009 12:55:15.566591    5720 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:15.566636    5720 start.go:729] Will try again in 5 seconds ...
	I1009 12:55:20.568798    5720 start.go:360] acquireMachinesLock for default-k8s-diff-port-367000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:20.569328    5720 start.go:364] duration metric: took 411.458µs to acquireMachinesLock for "default-k8s-diff-port-367000"
	I1009 12:55:20.569467    5720 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:55:20.569790    5720 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:55:20.580418    5720 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:55:20.629656    5720 start.go:159] libmachine.API.Create for "default-k8s-diff-port-367000" (driver="qemu2")
	I1009 12:55:20.629710    5720 client.go:168] LocalClient.Create starting
	I1009 12:55:20.629857    5720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:55:20.629932    5720 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:20.629951    5720 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:20.630051    5720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:55:20.630107    5720 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:20.630121    5720 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:20.630966    5720 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:55:20.801447    5720 main.go:141] libmachine: Creating SSH key...
	I1009 12:55:20.848084    5720 main.go:141] libmachine: Creating Disk image...
	I1009 12:55:20.848092    5720 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:55:20.848288    5720 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:20.858125    5720 main.go:141] libmachine: STDOUT: 
	I1009 12:55:20.858142    5720 main.go:141] libmachine: STDERR: 
	I1009 12:55:20.858194    5720 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2 +20000M
	I1009 12:55:20.866792    5720 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:55:20.866811    5720 main.go:141] libmachine: STDERR: 
	I1009 12:55:20.866825    5720 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:20.866830    5720 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:55:20.866854    5720 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:20.866885    5720 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:09:2c:06:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:20.868799    5720 main.go:141] libmachine: STDOUT: 
	I1009 12:55:20.868816    5720 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:20.868829    5720 client.go:171] duration metric: took 239.120709ms to LocalClient.Create
	I1009 12:55:22.870946    5720 start.go:128] duration metric: took 2.301205416s to createHost
	I1009 12:55:22.870991    5720 start.go:83] releasing machines lock for "default-k8s-diff-port-367000", held for 2.301716417s
	W1009 12:55:22.871350    5720 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:22.882012    5720 out.go:201] 
	W1009 12:55:22.888018    5720 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:22.888045    5720 out.go:270] * 
	* 
	W1009 12:55:22.890726    5720 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:55:22.899978    5720 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (71.037375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
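
Note: in this run provisioning gets further than the embed-certs restart above: the ISO copy, SSH key, and disk-image steps (qemu-img convert, then resize) all succeed, and only the final socket_vmnet-backed launch fails, which localizes the fault to host networking rather than QEMU or the image cache. A sketch mirroring the logged disk steps (filenames are placeholders, not the test's actual paths):

	# mirror minikube's disk steps: raw -> qcow2, then grow by 20000 MB
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	qemu-img info disk.qcow2    # confirm the enlarged virtual size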

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-266000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (34.672417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
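
Note: this is a downstream failure, not a new one. Because SecondStart never brought the cluster up, minikube never wrote an "embed-certs-266000" context into the kubeconfig, so the test's client config lookup fails before any API call is made. A quick check (kubeconfig path taken from the environment dump earlier in this report):

	# the embed-certs-266000 context should be absent after the failed start
	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/19780-1164/kubeconfig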

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-266000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-266000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-266000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.229292ms)

** stderr ** 
	error: context "embed-certs-266000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-266000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (34.039625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-266000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (33.695541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
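
Note: the diff above is the test's want-versus-got form; all eight default images for Kubernetes v1.31.1 are reported missing simply because no VM ever booted, so nothing was loaded into the container runtime. The check can be reproduced by hand with the same command the test runs:

	# with a healthy cluster this prints the cached image names as JSON;
	# here it returns nothing because the host is Stopped
	out/minikube-darwin-arm64 -p embed-certs-266000 image list --format=json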

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-266000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-266000 --alsologtostderr -v=1: exit status 83 (47.803291ms)

-- stdout --
	* The control-plane node embed-certs-266000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-266000"

-- /stdout --
** stderr ** 
	I1009 12:55:15.841781    5742 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:15.842000    5742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:15.842003    5742 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:15.842006    5742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:15.842111    5742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:15.842331    5742 out.go:352] Setting JSON to false
	I1009 12:55:15.842338    5742 mustload.go:65] Loading cluster: embed-certs-266000
	I1009 12:55:15.842552    5742 config.go:182] Loaded profile config "embed-certs-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:15.846981    5742 out.go:177] * The control-plane node embed-certs-266000 host is not running: state=Stopped
	I1009 12:55:15.852972    5742 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-266000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-266000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (33.598125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (33.348208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)
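
Note: the post-mortem helper polls host state with a Go template over minikube's status struct ({{.Host}}); the recurring exit status 7 appears to be minikube's composite not-running code rather than a hard error, which is why the harness flags it "(may be ok)". Assuming the documented template fields, a slightly wider probe would be:

	# {{.Host}} is a Go template over minikube's status struct;
	# other fields (e.g. {{.Kubelet}}, {{.APIServer}}) can be combined
	out/minikube-darwin-arm64 status -p embed-certs-266000 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'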

TestStartStop/group/newest-cni/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-851000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-851000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.997698083s)

-- stdout --
	* [newest-cni-851000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-851000" primary control-plane node in "newest-cni-851000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-851000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1009 12:55:16.186112    5759 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:16.186268    5759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:16.186273    5759 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:16.186276    5759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:16.186411    5759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:16.187544    5759 out.go:352] Setting JSON to false
	I1009 12:55:16.205083    5759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5086,"bootTime":1728498630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:55:16.205151    5759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:55:16.209867    5759 out.go:177] * [newest-cni-851000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:55:16.216929    5759 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:55:16.216998    5759 notify.go:220] Checking for updates...
	I1009 12:55:16.222868    5759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:55:16.225882    5759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:55:16.228873    5759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:55:16.231899    5759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:55:16.234891    5759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:55:16.238220    5759 config.go:182] Loaded profile config "default-k8s-diff-port-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:16.238279    5759 config.go:182] Loaded profile config "multinode-341000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:16.238338    5759 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:55:16.242866    5759 out.go:177] * Using the qemu2 driver based on user configuration
	I1009 12:55:16.249843    5759 start.go:297] selected driver: qemu2
	I1009 12:55:16.249849    5759 start.go:901] validating driver "qemu2" against <nil>
	I1009 12:55:16.249855    5759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:55:16.252308    5759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1009 12:55:16.252349    5759 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1009 12:55:16.256858    5759 out.go:177] * Automatically selected the socket_vmnet network
	I1009 12:55:16.263886    5759 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 12:55:16.263903    5759 cni.go:84] Creating CNI manager for ""
	I1009 12:55:16.263926    5759 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:55:16.263931    5759 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 12:55:16.263968    5759 start.go:340] cluster config:
	{Name:newest-cni-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:16.268682    5759 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:55:16.272925    5759 out.go:177] * Starting "newest-cni-851000" primary control-plane node in "newest-cni-851000" cluster
	I1009 12:55:16.279843    5759 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:55:16.279861    5759 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:55:16.279870    5759 cache.go:56] Caching tarball of preloaded images
	I1009 12:55:16.279966    5759 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:55:16.279972    5759 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:55:16.280059    5759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/newest-cni-851000/config.json ...
	I1009 12:55:16.280071    5759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/newest-cni-851000/config.json: {Name:mkab4f280c9565334994b2d8341b69fba81d9283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 12:55:16.280342    5759 start.go:360] acquireMachinesLock for newest-cni-851000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:16.280394    5759 start.go:364] duration metric: took 45.917µs to acquireMachinesLock for "newest-cni-851000"
	I1009 12:55:16.280408    5759 start.go:93] Provisioning new machine with config: &{Name:newest-cni-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:55:16.280445    5759 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:55:16.287786    5759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:55:16.305487    5759 start.go:159] libmachine.API.Create for "newest-cni-851000" (driver="qemu2")
	I1009 12:55:16.305514    5759 client.go:168] LocalClient.Create starting
	I1009 12:55:16.305605    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:55:16.305644    5759 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:16.305655    5759 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:16.305696    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:55:16.305727    5759 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:16.305735    5759 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:16.306157    5759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:55:16.461181    5759 main.go:141] libmachine: Creating SSH key...
	I1009 12:55:16.625251    5759 main.go:141] libmachine: Creating Disk image...
	I1009 12:55:16.625263    5759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:55:16.625500    5759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:16.636141    5759 main.go:141] libmachine: STDOUT: 
	I1009 12:55:16.636167    5759 main.go:141] libmachine: STDERR: 
	I1009 12:55:16.636229    5759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2 +20000M
	I1009 12:55:16.644759    5759 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:55:16.644773    5759 main.go:141] libmachine: STDERR: 
	I1009 12:55:16.644789    5759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:16.644794    5759 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:55:16.644807    5759 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:16.644845    5759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d5:df:23:54:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:16.646704    5759 main.go:141] libmachine: STDOUT: 
	I1009 12:55:16.646717    5759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:16.646736    5759 client.go:171] duration metric: took 341.227125ms to LocalClient.Create
	I1009 12:55:18.648897    5759 start.go:128] duration metric: took 2.368493334s to createHost
	I1009 12:55:18.648984    5759 start.go:83] releasing machines lock for "newest-cni-851000", held for 2.368659167s
	W1009 12:55:18.649040    5759 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:18.660271    5759 out.go:177] * Deleting "newest-cni-851000" in qemu2 ...
	W1009 12:55:18.691714    5759 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:18.691746    5759 start.go:729] Will try again in 5 seconds ...
	I1009 12:55:23.693801    5759 start.go:360] acquireMachinesLock for newest-cni-851000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:23.694245    5759 start.go:364] duration metric: took 345.583µs to acquireMachinesLock for "newest-cni-851000"
	I1009 12:55:23.694419    5759 start.go:93] Provisioning new machine with config: &{Name:newest-cni-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 12:55:23.694725    5759 start.go:125] createHost starting for "" (driver="qemu2")
	I1009 12:55:23.704410    5759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 12:55:23.754219    5759 start.go:159] libmachine.API.Create for "newest-cni-851000" (driver="qemu2")
	I1009 12:55:23.754283    5759 client.go:168] LocalClient.Create starting
	I1009 12:55:23.754392    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/ca.pem
	I1009 12:55:23.754446    5759 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:23.754469    5759 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:23.754533    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19780-1164/.minikube/certs/cert.pem
	I1009 12:55:23.754564    5759 main.go:141] libmachine: Decoding PEM data...
	I1009 12:55:23.754574    5759 main.go:141] libmachine: Parsing certificate...
	I1009 12:55:23.755274    5759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1009 12:55:23.926793    5759 main.go:141] libmachine: Creating SSH key...
	I1009 12:55:24.080901    5759 main.go:141] libmachine: Creating Disk image...
	I1009 12:55:24.080913    5759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1009 12:55:24.081150    5759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2.raw /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:24.091125    5759 main.go:141] libmachine: STDOUT: 
	I1009 12:55:24.091145    5759 main.go:141] libmachine: STDERR: 
	I1009 12:55:24.091208    5759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2 +20000M
	I1009 12:55:24.099628    5759 main.go:141] libmachine: STDOUT: Image resized.
	
	I1009 12:55:24.099649    5759 main.go:141] libmachine: STDERR: 
	I1009 12:55:24.099660    5759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:24.099666    5759 main.go:141] libmachine: Starting QEMU VM...
	I1009 12:55:24.099675    5759 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:24.099716    5759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:97:d3:26:c7:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:24.101530    5759 main.go:141] libmachine: STDOUT: 
	I1009 12:55:24.101542    5759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:24.101556    5759 client.go:171] duration metric: took 347.280083ms to LocalClient.Create
	I1009 12:55:26.103686    5759 start.go:128] duration metric: took 2.409006292s to createHost
	I1009 12:55:26.103774    5759 start.go:83] releasing machines lock for "newest-cni-851000", held for 2.409580084s
	W1009 12:55:26.104180    5759 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:26.118171    5759 out.go:201] 
	W1009 12:55:26.123242    5759 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:26.123278    5759 out.go:270] * 
	W1009 12:55:26.125405    5759 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:55:26.136927    5759 out.go:201] 
** /stderr **
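Up to the point of failure, the machine-creation steps in the log above succeed: libmachine converts the raw seed disk to qcow2 and then grows it by the requested 20000 MB before attempting to launch QEMU. As a rough, self-contained illustration of that two-step qemu-img sequence (paths and the helper name here are placeholders, not minikube's actual API):

// createDisk mirrors the two qemu-img invocations visible in the log:
// convert the raw seed to qcow2, then grow it by the requested size.
// Illustrative sketch only; paths and size are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func createDisk(raw, qcow2 string, extraMB int) error {
	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert",
		"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// qemu-img resize <qcow2> +<extraMB>M
	if out, err := exec.Command("qemu-img", "resize",
		qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}

Both invocations use the exact flags seen in the log: `convert -f raw -O qcow2` followed by `resize +20000M`.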
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-851000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000: exit status 7 (69.055042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.07s)
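This first-start failure, like every other failure in this group, bottoms out at the same refused Unix socket: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A quick way to test the daemon side independently of minikube is to dial the socket directly; the following is a minimal diagnostic sketch (not part of the test suite), using the socket path reported above:

// probe_vmnet.go — minimal diagnostic sketch: dial the socket_vmnet
// Unix socket the same way socket_vmnet_client must, to separate
// "daemon not running" from problems inside minikube itself.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failures above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" => nothing is listening on the socket;
		// "no such file or directory" => the socket was never created.
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

A `connection refused` from this probe means nothing is listening on the socket, i.e. the socket_vmnet service on the CI host is down, which would account for every GUEST_PROVISION exit in this group at once.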
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-367000 create -f testdata/busybox.yaml: exit status 1 (29.627667ms)
** stderr ** 
	error: context "default-k8s-diff-port-367000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-367000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (33.19ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (33.035625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
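This DeployApp failure is consequential rather than independent: the cluster was never provisioned, so no kubeconfig context named default-k8s-diff-port-367000 exists and kubectl exits immediately. A pre-flight context check makes that explicit; this sketch assumes only that kubectl is on PATH and shells out to `kubectl config get-contexts -o name`:

// hasContext reports whether a kubeconfig context exists. Sketch only;
// assumes kubectl is on PATH and the default kubeconfig is in effect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("default-k8s-diff-port-367000")
	fmt.Println(ok, err) // false, <nil> on this host: the context was never written
}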
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-367000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-367000 describe deploy/metrics-server -n kube-system: exit status 1 (27.405167ms)
** stderr ** 
	error: context "default-k8s-diff-port-367000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-367000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (33.734417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
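The `--format={{.Host}}` argument used throughout these post-mortems is a Go text/template rendered against minikube's status structure, which is why the raw output is just the single word `Stopped`. A minimal reproduction of the mechanism (the Status type below is illustrative, not minikube's real status type):

// Renders a status struct through a Go text/template, the same mechanism
// behind `minikube status --format={{.Host}}`.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host    string
	Kubelet string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// For a host that was never started, minikube reports "Stopped",
	// matching the post-mortem output captured above.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
}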
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.192412458s)
-- stdout --
	* [default-k8s-diff-port-367000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-367000" primary control-plane node in "default-k8s-diff-port-367000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-367000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-367000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1009 12:55:26.993122    5824 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:26.993266    5824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:26.993270    5824 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:26.993272    5824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:26.993382    5824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:26.994454    5824 out.go:352] Setting JSON to false
	I1009 12:55:27.012036    5824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5097,"bootTime":1728498630,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:55:27.012100    5824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:55:27.016050    5824 out.go:177] * [default-k8s-diff-port-367000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:55:27.023075    5824 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:55:27.023141    5824 notify.go:220] Checking for updates...
	I1009 12:55:27.030104    5824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:55:27.033067    5824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:55:27.036053    5824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:55:27.039089    5824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:55:27.042068    5824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:55:27.045439    5824 config.go:182] Loaded profile config "default-k8s-diff-port-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:27.045729    5824 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:55:27.050060    5824 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:55:27.057013    5824 start.go:297] selected driver: qemu2
	I1009 12:55:27.057021    5824 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:27.057081    5824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:55:27.059702    5824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 12:55:27.059731    5824 cni.go:84] Creating CNI manager for ""
	I1009 12:55:27.059757    5824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:55:27.059781    5824 start.go:340] cluster config:
	{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-367000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:27.064298    5824 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:55:27.072917    5824 out.go:177] * Starting "default-k8s-diff-port-367000" primary control-plane node in "default-k8s-diff-port-367000" cluster
	I1009 12:55:27.077044    5824 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:55:27.077062    5824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:55:27.077073    5824 cache.go:56] Caching tarball of preloaded images
	I1009 12:55:27.077147    5824 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:55:27.077152    5824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:55:27.077214    5824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/default-k8s-diff-port-367000/config.json ...
	I1009 12:55:27.077685    5824 start.go:360] acquireMachinesLock for default-k8s-diff-port-367000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:27.077715    5824 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "default-k8s-diff-port-367000"
	I1009 12:55:27.077724    5824 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:55:27.077728    5824 fix.go:54] fixHost starting: 
	I1009 12:55:27.077844    5824 fix.go:112] recreateIfNeeded on default-k8s-diff-port-367000: state=Stopped err=<nil>
	W1009 12:55:27.077851    5824 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:55:27.081977    5824 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-367000" ...
	I1009 12:55:27.090024    5824 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:27.090055    5824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:09:2c:06:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:27.092227    5824 main.go:141] libmachine: STDOUT: 
	I1009 12:55:27.092248    5824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:27.092277    5824 fix.go:56] duration metric: took 14.5485ms for fixHost
	I1009 12:55:27.092283    5824 start.go:83] releasing machines lock for "default-k8s-diff-port-367000", held for 14.563583ms
	W1009 12:55:27.092289    5824 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:27.092317    5824 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:27.092321    5824 start.go:729] Will try again in 5 seconds ...
	I1009 12:55:32.094350    5824 start.go:360] acquireMachinesLock for default-k8s-diff-port-367000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:32.094793    5824 start.go:364] duration metric: took 315.167µs to acquireMachinesLock for "default-k8s-diff-port-367000"
	I1009 12:55:32.094921    5824 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:55:32.094941    5824 fix.go:54] fixHost starting: 
	I1009 12:55:32.095622    5824 fix.go:112] recreateIfNeeded on default-k8s-diff-port-367000: state=Stopped err=<nil>
	W1009 12:55:32.095650    5824 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:55:32.105093    5824 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-367000" ...
	I1009 12:55:32.109021    5824 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:32.109271    5824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:90:09:2c:06:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/default-k8s-diff-port-367000/disk.qcow2
	I1009 12:55:32.119099    5824 main.go:141] libmachine: STDOUT: 
	I1009 12:55:32.119164    5824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:32.119247    5824 fix.go:56] duration metric: took 24.311ms for fixHost
	I1009 12:55:32.119271    5824 start.go:83] releasing machines lock for "default-k8s-diff-port-367000", held for 24.453958ms
	W1009 12:55:32.119464    5824 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-367000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:32.127041    5824 out.go:201] 
	W1009 12:55:32.131115    5824 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:32.131141    5824 out.go:270] * 
	W1009 12:55:32.133641    5824 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:55:32.140040    5824 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (70.607167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
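Both SecondStart logs show the same control flow: fixHost fails, minikube sleeps a fixed five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. That is a bounded retry with a constant delay; sketched generically below (this is the pattern, not minikube's actual implementation):

// retry runs fn up to attempts times with a fixed delay between tries,
// matching the "Will try again in 5 seconds ..." behaviour in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < attempts-1 {
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retry(2, 5*time.Second, func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
	})
	fmt.Println(err) // both attempts fail, mirroring the exit status 80 above
}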
TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-851000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-851000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.193499958s)
-- stdout --
	* [newest-cni-851000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-851000" primary control-plane node in "newest-cni-851000" cluster
	* Restarting existing qemu2 VM for "newest-cni-851000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-851000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1009 12:55:30.019336    5847 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:30.019495    5847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:30.019503    5847 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:30.019505    5847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:30.019642    5847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:30.020725    5847 out.go:352] Setting JSON to false
	I1009 12:55:30.038157    5847 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5100,"bootTime":1728498630,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 12:55:30.038230    5847 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 12:55:30.042160    5847 out.go:177] * [newest-cni-851000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 12:55:30.050060    5847 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 12:55:30.050118    5847 notify.go:220] Checking for updates...
	I1009 12:55:30.057116    5847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 12:55:30.060067    5847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 12:55:30.063130    5847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 12:55:30.066113    5847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 12:55:30.069075    5847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 12:55:30.072362    5847 config.go:182] Loaded profile config "newest-cni-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:30.072631    5847 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 12:55:30.077037    5847 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 12:55:30.084146    5847 start.go:297] selected driver: qemu2
	I1009 12:55:30.084154    5847 start.go:901] validating driver "qemu2" against &{Name:newest-cni-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-851000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:30.084236    5847 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 12:55:30.086799    5847 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 12:55:30.086819    5847 cni.go:84] Creating CNI manager for ""
	I1009 12:55:30.086841    5847 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 12:55:30.086866    5847 start.go:340] cluster config:
	{Name:newest-cni-851000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-851000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 12:55:30.091428    5847 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 12:55:30.098048    5847 out.go:177] * Starting "newest-cni-851000" primary control-plane node in "newest-cni-851000" cluster
	I1009 12:55:30.102081    5847 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 12:55:30.102098    5847 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 12:55:30.102109    5847 cache.go:56] Caching tarball of preloaded images
	I1009 12:55:30.102175    5847 preload.go:172] Found /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 12:55:30.102181    5847 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1009 12:55:30.102246    5847 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/newest-cni-851000/config.json ...
	I1009 12:55:30.102729    5847 start.go:360] acquireMachinesLock for newest-cni-851000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:30.102762    5847 start.go:364] duration metric: took 26.541µs to acquireMachinesLock for "newest-cni-851000"
	I1009 12:55:30.102771    5847 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:55:30.102775    5847 fix.go:54] fixHost starting: 
	I1009 12:55:30.102898    5847 fix.go:112] recreateIfNeeded on newest-cni-851000: state=Stopped err=<nil>
	W1009 12:55:30.102906    5847 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:55:30.107051    5847 out.go:177] * Restarting existing qemu2 VM for "newest-cni-851000" ...
	I1009 12:55:30.115027    5847 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:30.115063    5847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:97:d3:26:c7:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:30.117413    5847 main.go:141] libmachine: STDOUT: 
	I1009 12:55:30.117432    5847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:30.117463    5847 fix.go:56] duration metric: took 14.687292ms for fixHost
	I1009 12:55:30.117467    5847 start.go:83] releasing machines lock for "newest-cni-851000", held for 14.701542ms
	W1009 12:55:30.117474    5847 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:30.117521    5847 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:30.117526    5847 start.go:729] Will try again in 5 seconds ...
	I1009 12:55:35.119520    5847 start.go:360] acquireMachinesLock for newest-cni-851000: {Name:mk6dc29eb74286120c58954bae0cd65a85a89b02 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 12:55:35.120038    5847 start.go:364] duration metric: took 439.416µs to acquireMachinesLock for "newest-cni-851000"
	I1009 12:55:35.120216    5847 start.go:96] Skipping create...Using existing machine configuration
	I1009 12:55:35.120240    5847 fix.go:54] fixHost starting: 
	I1009 12:55:35.121024    5847 fix.go:112] recreateIfNeeded on newest-cni-851000: state=Stopped err=<nil>
	W1009 12:55:35.121051    5847 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 12:55:35.131489    5847 out.go:177] * Restarting existing qemu2 VM for "newest-cni-851000" ...
	I1009 12:55:35.134381    5847 qemu.go:418] Using hvf for hardware acceleration
	I1009 12:55:35.134563    5847 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:97:d3:26:c7:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19780-1164/.minikube/machines/newest-cni-851000/disk.qcow2
	I1009 12:55:35.145653    5847 main.go:141] libmachine: STDOUT: 
	I1009 12:55:35.145724    5847 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1009 12:55:35.145827    5847 fix.go:56] duration metric: took 25.576584ms for fixHost
	I1009 12:55:35.145845    5847 start.go:83] releasing machines lock for "newest-cni-851000", held for 25.785ms
	W1009 12:55:35.146006    5847 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-851000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-851000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1009 12:55:35.154449    5847 out.go:201] 
	W1009 12:55:35.157530    5847 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1009 12:55:35.157555    5847 out.go:270] * 
	* 
	W1009 12:55:35.160351    5847 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 12:55:35.168409    5847 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-851000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000: exit status 7 (73.630541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
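
Note: every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM is never launched. A minimal triage sketch for the build host, assuming the Homebrew-managed socket_vmnet setup implied by the paths above (the service commands are an assumption, not taken from this log):

	# check that the socket exists and the daemon is alive
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if not, restart the service (socket_vmnet must run as root) and retry
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p newest-cni-851000 --driver=qemu2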

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-367000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (35.108958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-367000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-367000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.104583ms)

** stderr ** 
	error: context "default-k8s-diff-port-367000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-367000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (33.160333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-367000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (32.950792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
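
Note: the diff above lists only wanted images because "image list" has nothing to report for a profile whose host never started; it is a downstream symptom of the start failure, not an independent image problem. On a healthy profile the expected set could be verified by hand, e.g. (a sketch; the jq filter assumes each JSON entry carries a repoTags array):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-367000 image list --format=json | jq -r '.[].repoTags[]'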

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-367000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-367000 --alsologtostderr -v=1: exit status 83 (45.695208ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-367000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-367000"

-- /stdout --
** stderr ** 
	I1009 12:55:32.429185    5866 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:32.429364    5866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:32.429367    5866 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:32.429370    5866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:32.429507    5866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:32.429732    5866 out.go:352] Setting JSON to false
	I1009 12:55:32.429739    5866 mustload.go:65] Loading cluster: default-k8s-diff-port-367000
	I1009 12:55:32.429970    5866 config.go:182] Loaded profile config "default-k8s-diff-port-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:32.434484    5866 out.go:177] * The control-plane node default-k8s-diff-port-367000 host is not running: state=Stopped
	I1009 12:55:32.438494    5866 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-367000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-367000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (33.230333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (32.69025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
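
Note: exit status 83 here is minikube refusing to pause a host that is not running (see the state=Stopped hint above), not a crash. A script that pauses opportunistically could gate on host state first; a sketch reusing the status invocation from the post-mortem:

	host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p default-k8s-diff-port-367000)
	[ "$host" = "Running" ] && out/minikube-darwin-arm64 pause -p default-k8s-diff-port-367000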

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-851000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000: exit status 7 (34.175042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-851000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-851000 --alsologtostderr -v=1: exit status 83 (46.332375ms)

-- stdout --
	* The control-plane node newest-cni-851000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-851000"

-- /stdout --
** stderr ** 
	I1009 12:55:35.369613    5890 out.go:345] Setting OutFile to fd 1 ...
	I1009 12:55:35.369805    5890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:35.369809    5890 out.go:358] Setting ErrFile to fd 2...
	I1009 12:55:35.369811    5890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 12:55:35.369940    5890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 12:55:35.370194    5890 out.go:352] Setting JSON to false
	I1009 12:55:35.370201    5890 mustload.go:65] Loading cluster: newest-cni-851000
	I1009 12:55:35.370447    5890 config.go:182] Loaded profile config "newest-cni-851000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 12:55:35.374911    5890 out.go:177] * The control-plane node newest-cni-851000 host is not running: state=Stopped
	I1009 12:55:35.378862    5890 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-851000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-851000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000: exit status 7 (34.045875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-851000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000: exit status 7 (33.61775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (138/257)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 13.33
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.22
38 TestErrorSpam/setup 35.43
39 TestErrorSpam/start 0.36
40 TestErrorSpam/status 0.26
41 TestErrorSpam/pause 0.71
42 TestErrorSpam/unpause 0.65
43 TestErrorSpam/stop 64.28
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 44.68
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 57.13
50 TestFunctional/serial/KubeContext 0.03
51 TestFunctional/serial/KubectlGetPods 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
55 TestFunctional/serial/CacheCmd/cache/add_local 1.09
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.72
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
61 TestFunctional/serial/MinikubeKubectlCmd 0.72
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.16
63 TestFunctional/serial/ExtraConfig 34.37
64 TestFunctional/serial/ComponentHealth 0.05
65 TestFunctional/serial/LogsCmd 0.64
66 TestFunctional/serial/LogsFileCmd 0.6
67 TestFunctional/serial/InvalidService 4.21
69 TestFunctional/parallel/ConfigCmd 0.26
70 TestFunctional/parallel/DashboardCmd 13.48
71 TestFunctional/parallel/DryRun 0.24
72 TestFunctional/parallel/InternationalLanguage 0.12
73 TestFunctional/parallel/StatusCmd 0.24
78 TestFunctional/parallel/AddonsCmd 0.11
79 TestFunctional/parallel/PersistentVolumeClaim 25.6
81 TestFunctional/parallel/SSHCmd 0.18
82 TestFunctional/parallel/CpCmd 0.47
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.4
89 TestFunctional/parallel/NodeLabels 0.04
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
93 TestFunctional/parallel/License 0.25
94 TestFunctional/parallel/Version/short 0.04
95 TestFunctional/parallel/Version/components 0.18
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
100 TestFunctional/parallel/ImageCommands/ImageBuild 2.13
101 TestFunctional/parallel/ImageCommands/Setup 1.73
102 TestFunctional/parallel/DockerEnv/bash 0.3
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
106 TestFunctional/parallel/ServiceCmd/DeployApp 12.1
107 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
108 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
109 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
110 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.17
111 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
112 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
113 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
119 TestFunctional/parallel/ServiceCmd/List 0.13
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
122 TestFunctional/parallel/ServiceCmd/Format 0.09
123 TestFunctional/parallel/ServiceCmd/URL 0.09
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
131 TestFunctional/parallel/ProfileCmd/profile_list 0.13
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
133 TestFunctional/parallel/MountCmd/any-port 6.09
134 TestFunctional/parallel/MountCmd/specific-port 1.91
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
136 TestFunctional/delete_echo-server_images 0.07
137 TestFunctional/delete_my-image_image 0.02
138 TestFunctional/delete_minikube_cached_images 0.02
148 TestMultiControlPlane/serial/CopyFile 0.04
156 TestImageBuild/serial/Setup 34.82
157 TestImageBuild/serial/NormalBuild 1.54
158 TestImageBuild/serial/BuildWithBuildArg 0.73
159 TestImageBuild/serial/BuildWithDockerIgnore 0.45
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.47
165 TestJSONOutput/start/Audit 0
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 6.17
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.21
192 TestMainNoArgs 0.04
193 TestMinikubeProfile 69.84
237 TestStoppedBinaryUpgrade/Setup 2.29
239 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 0.11
258 TestNoKubernetes/serial/Stop 1.88
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
274 TestStartStop/group/old-k8s-version/serial/Stop 1.89
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
285 TestStartStop/group/no-preload/serial/Stop 3.49
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
296 TestStartStop/group/embed-certs/serial/Stop 1.85
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.63
310 TestStartStop/group/newest-cni/serial/DeployApp 0
311 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
312 TestStartStop/group/newest-cni/serial/Stop 3.58
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1009 11:46:04.394420    1686 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1009 11:46:04.394945    1686 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
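
Note: preload-exists is a purely local check; it passes as soon as the expected tarball is on disk. The equivalent manual check, with the path copied from the log above:

	ls -lh /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4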

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-185000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-185000: exit status 85 (99.340667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-185000 | jenkins | v1.34.0 | 09 Oct 24 11:45 PDT |          |
	|         | -p download-only-185000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 11:45:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 11:45:34.333131    1687 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:45:34.333300    1687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:45:34.333304    1687 out.go:358] Setting ErrFile to fd 2...
	I1009 11:45:34.333306    1687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:45:34.333445    1687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	W1009 11:45:34.333534    1687 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19780-1164/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19780-1164/.minikube/config/config.json: no such file or directory
	I1009 11:45:34.334957    1687 out.go:352] Setting JSON to true
	I1009 11:45:34.353626    1687 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":904,"bootTime":1728498630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:45:34.353698    1687 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:45:34.358820    1687 out.go:97] [download-only-185000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:45:34.358941    1687 notify.go:220] Checking for updates...
	W1009 11:45:34.358980    1687 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 11:45:34.361823    1687 out.go:169] MINIKUBE_LOCATION=19780
	I1009 11:45:34.369785    1687 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:45:34.375801    1687 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:45:34.378826    1687 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:45:34.379972    1687 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	W1009 11:45:34.386898    1687 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 11:45:34.387102    1687 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:45:34.390838    1687 out.go:97] Using the qemu2 driver based on user configuration
	I1009 11:45:34.390860    1687 start.go:297] selected driver: qemu2
	I1009 11:45:34.390890    1687 start.go:901] validating driver "qemu2" against <nil>
	I1009 11:45:34.390978    1687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 11:45:34.394867    1687 out.go:169] Automatically selected the socket_vmnet network
	I1009 11:45:34.399100    1687 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1009 11:45:34.399233    1687 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 11:45:34.399269    1687 cni.go:84] Creating CNI manager for ""
	I1009 11:45:34.399300    1687 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1009 11:45:34.399348    1687 start.go:340] cluster config:
	{Name:download-only-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:45:34.404041    1687 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 11:45:34.407796    1687 out.go:97] Downloading VM boot image ...
	I1009 11:45:34.407813    1687 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1009 11:45:48.744440    1687 out.go:97] Starting "download-only-185000" primary control-plane node in "download-only-185000" cluster
	I1009 11:45:48.744460    1687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 11:45:48.802717    1687 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 11:45:48.802736    1687 cache.go:56] Caching tarball of preloaded images
	I1009 11:45:48.802954    1687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 11:45:48.808167    1687 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1009 11:45:48.808174    1687 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:45:48.888437    1687 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1009 11:46:02.906116    1687 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:46:02.906283    1687 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:46:03.601492    1687 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1009 11:46:03.601688    1687 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/download-only-185000/config.json ...
	I1009 11:46:03.601704    1687 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19780-1164/.minikube/profiles/download-only-185000/config.json: {Name:mkd0352330e63ea9488a0405dc95f822cf67234d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 11:46:03.601979    1687 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1009 11:46:03.602205    1687 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1009 11:46:04.346361    1687 out.go:193] 
	W1009 11:46:04.351405    1687 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19780-1164/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0 0x1074bcfa0] Decompressors:map[bz2:0x140003f9830 gz:0x140003f9838 tar:0x140003f9790 tar.bz2:0x140003f97a0 tar.gz:0x140003f97e0 tar.xz:0x140003f97f0 tar.zst:0x140003f9820 tbz2:0x140003f97a0 tgz:0x140003f97e0 txz:0x140003f97f0 tzst:0x140003f9820 xz:0x140003f9840 zip:0x140003f9850 zst:0x140003f9848] Getters:map[file:0x14000a18680 http:0x140008ba0a0 https:0x140008ba190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1009 11:46:04.351432    1687 out_reason.go:110] 
	W1009 11:46:04.358327    1687 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 11:46:04.361360    1687 out.go:193] 
	
	
	* The control-plane node download-only-185000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-185000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
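
Note: the 404 captured above is on the checksum URL for kubectl v1.20.0 on darwin/arm64; upstream Kubernetes did not publish darwin/arm64 client binaries until around v1.21, so this download cannot succeed on an Apple Silicon host. That can be confirmed independently of minikube (a sketch):

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n1   # expected: 404
	curl -sI https://dl.k8s.io/release/v1.21.0/bin/darwin/arm64/kubectl.sha256 | head -n1   # expected: 200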

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-185000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (13.33s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-856000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-856000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (13.333520334s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.33s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1009 11:46:18.105533    1686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1009 11:46:18.105591    1686 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-856000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-856000: exit status 85 (79.28625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-185000 | jenkins | v1.34.0 | 09 Oct 24 11:45 PDT |                     |
	|         | -p download-only-185000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Oct 24 11:46 PDT | 09 Oct 24 11:46 PDT |
	| delete  | -p download-only-185000        | download-only-185000 | jenkins | v1.34.0 | 09 Oct 24 11:46 PDT | 09 Oct 24 11:46 PDT |
	| start   | -o=json --download-only        | download-only-856000 | jenkins | v1.34.0 | 09 Oct 24 11:46 PDT |                     |
	|         | -p download-only-856000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 11:46:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 11:46:04.803055    1715 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:46:04.803208    1715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:46:04.803211    1715 out.go:358] Setting ErrFile to fd 2...
	I1009 11:46:04.803214    1715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:46:04.803361    1715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 11:46:04.804490    1715 out.go:352] Setting JSON to true
	I1009 11:46:04.821983    1715 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":934,"bootTime":1728498630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:46:04.822066    1715 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:46:04.827079    1715 out.go:97] [download-only-856000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:46:04.827199    1715 notify.go:220] Checking for updates...
	I1009 11:46:04.830946    1715 out.go:169] MINIKUBE_LOCATION=19780
	I1009 11:46:04.834006    1715 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:46:04.837818    1715 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:46:04.840981    1715 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:46:04.844010    1715 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	W1009 11:46:04.850023    1715 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 11:46:04.850184    1715 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:46:04.852955    1715 out.go:97] Using the qemu2 driver based on user configuration
	I1009 11:46:04.852965    1715 start.go:297] selected driver: qemu2
	I1009 11:46:04.852969    1715 start.go:901] validating driver "qemu2" against <nil>
	I1009 11:46:04.853036    1715 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 11:46:04.856048    1715 out.go:169] Automatically selected the socket_vmnet network
	I1009 11:46:04.861315    1715 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1009 11:46:04.861423    1715 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 11:46:04.861444    1715 cni.go:84] Creating CNI manager for ""
	I1009 11:46:04.861467    1715 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1009 11:46:04.861472    1715 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 11:46:04.861525    1715 start.go:340] cluster config:
	{Name:download-only-856000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-856000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:46:04.865789    1715 iso.go:125] acquiring lock: {Name:mka6a0d59ae6cc32794e4fbfa8e5c6ce6a65e504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 11:46:04.869015    1715 out.go:97] Starting "download-only-856000" primary control-plane node in "download-only-856000" cluster
	I1009 11:46:04.869024    1715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 11:46:04.931285    1715 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1009 11:46:04.931315    1715 cache.go:56] Caching tarball of preloaded images
	I1009 11:46:04.931554    1715 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1009 11:46:04.934880    1715 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1009 11:46:04.934888    1715 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1009 11:46:05.017939    1715 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19780-1164/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-856000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-856000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-856000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
I1009 11:46:18.632195    1686 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-639000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-639000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-639000
--- PASS: TestBinaryMirror (0.33s)
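TestBinaryMirror points the kubectl download at a local HTTP mirror via --binary-mirror. A rough by-hand equivalent, assuming a directory laid out like the upstream release tree (port, directory and profile name are illustrative; the test runs its own helper server on 127.0.0.1:49313):

    python3 -m http.server 8080 --directory ./mirror &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:8080 --driver=qemu2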

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-953000
addons_test.go:935: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-953000: exit status 85 (60.495625ms)

-- stdout --
	* Profile "addons-953000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-953000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-953000
addons_test.go:946: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-953000: exit status 85 (64.383667ms)

-- stdout --
	* Profile "addons-953000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-953000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
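Both PreSetup checks lean on the addons commands failing fast, with exit status 85 here, when the profile does not exist, so the same guard works in a shell script (profile name taken from the logs above):

    out/minikube-darwin-arm64 addons enable dashboard -p addons-953000 \
        || echo "no such profile; create it first with: minikube start -p addons-953000"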

TestHyperKitDriverInstallOrUpdate (10.22s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1009 12:51:45.730538    1686 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 12:51:45.730757    1686 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- PASS: TestHyperKitDriverInstallOrUpdate (10.22s)

TestErrorSpam/setup (35.43s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-987000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-987000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 --driver=qemu2 : (35.43303225s)
--- PASS: TestErrorSpam/setup (35.43s)

TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.26s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.71s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 pause
--- PASS: TestErrorSpam/pause (0.71s)

TestErrorSpam/unpause (0.65s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

TestErrorSpam/stop (64.28s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 stop: (12.188969708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 stop: (26.055064459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-987000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-987000 stop: (26.034297083s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19780-1164/.minikube/files/etc/test/nested/copy/1686/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.68s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-517000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-517000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (44.678948709s)
--- PASS: TestFunctional/serial/StartWithProxy (44.68s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (57.13s)
=== RUN   TestFunctional/serial/SoftStart
I1009 11:50:10.526545    1686 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-517000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-517000 --alsologtostderr -v=8: (57.128288708s)
functional_test.go:663: soft start took 57.128745041s for "functional-517000" cluster.
I1009 11:51:07.656484    1686 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (57.13s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-517000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-517000 cache add registry.k8s.io/pause:3.1: (1.313065291s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3651314375/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cache add minikube-local-cache-test:functional-517000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cache delete minikube-local-cache-test:functional-517000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-517000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.091416ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
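The cache_reload sequence above boils down to three commands (profile name is a placeholder):

    out/minikube-darwin-arm64 -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest       # drop the image inside the VM
    out/minikube-darwin-arm64 -p <profile> cache reload                                           # push the local cache back in
    out/minikube-darwin-arm64 -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # image is present again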

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.72s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 kubectl -- --context functional-517000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.72s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-517000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-517000 get pods: (1.158414666s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)

TestFunctional/serial/ExtraConfig (34.37s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-517000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-517000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.367154417s)
functional_test.go:761: restart took 34.367273208s for "functional-517000" cluster.
I1009 11:51:49.196391    1686 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.37s)
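ExtraConfig exercises the component.key=value form of --extra-config, which may be repeated for other components. A sketch with the flag taken verbatim from the run above (profile name is a placeholder):

    out/minikube-darwin-arm64 start -p <profile> --wait=all \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision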

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-517000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
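The health check parses the full pod JSON; a terser variant of the same query, using a jsonpath template rather than what the suite runs (context name is a placeholder):

    kubectl --context <profile> get po -l tier=control-plane -n kube-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'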

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.6s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3009905217/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.21s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-517000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-517000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-517000: exit status 115 (145.433834ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31585 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-517000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)
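InvalidService hinges on minikube service exiting 115 (SVC_UNREACHABLE) when the Service has no running pods. Reproduced by hand with the suite's own manifest (profile name is a placeholder):

    kubectl --context <profile> apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p <profile>; echo "exit: $?"   # 115 in the run above
    kubectl --context <profile> delete -f testdata/invalidsvc.yaml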

TestFunctional/parallel/ConfigCmd (0.26s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 config get cpus: exit status 14 (35.778208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 config get cpus: exit status 14 (36.649834ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)
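config get exits 14 for a key that is not set, which is what the unset/get pairs above assert. The same round trip by hand (profile name is a placeholder):

    out/minikube-darwin-arm64 -p <profile> config set cpus 2
    out/minikube-darwin-arm64 -p <profile> config get cpus     # prints 2
    out/minikube-darwin-arm64 -p <profile> config unset cpus
    out/minikube-darwin-arm64 -p <profile> config get cpus     # exit status 14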

TestFunctional/parallel/DashboardCmd (13.48s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-517000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-517000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2266: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.48s)

TestFunctional/parallel/DryRun (0.24s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-517000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-517000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (131.247708ms)

-- stdout --
	* [functional-517000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1009 11:52:38.278882    2249 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:52:38.279053    2249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:52:38.279057    2249 out.go:358] Setting ErrFile to fd 2...
	I1009 11:52:38.279059    2249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:52:38.279193    2249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 11:52:38.280349    2249 out.go:352] Setting JSON to false
	I1009 11:52:38.300644    2249 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1328,"bootTime":1728498630,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:52:38.300751    2249 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:52:38.305323    2249 out.go:177] * [functional-517000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1009 11:52:38.310302    2249 notify.go:220] Checking for updates...
	I1009 11:52:38.314284    2249 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 11:52:38.322279    2249 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:52:38.330243    2249 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:52:38.334368    2249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:52:38.337272    2249 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 11:52:38.340322    2249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 11:52:38.343612    2249 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 11:52:38.343867    2249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:52:38.348262    2249 out.go:177] * Using the qemu2 driver based on existing profile
	I1009 11:52:38.355284    2249 start.go:297] selected driver: qemu2
	I1009 11:52:38.355298    2249 start.go:901] validating driver "qemu2" against &{Name:functional-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:52:38.355373    2249 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 11:52:38.362324    2249 out.go:201] 
	W1009 11:52:38.366347    2249 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 11:52:38.369312    2249 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-517000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
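--dry-run still runs driver and resource validation, which is why the 250MB request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) before any VM is touched. Sketch (profile name is a placeholder):

    out/minikube-darwin-arm64 start -p <profile> --dry-run --memory 250MB --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p <profile> --dry-run --memory 4000 --driver=qemu2    # passes validation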

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-517000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-517000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.855709ms)

-- stdout --
	* [functional-517000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1009 11:52:38.517423    2260 out.go:345] Setting OutFile to fd 1 ...
	I1009 11:52:38.517576    2260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:52:38.517579    2260 out.go:358] Setting ErrFile to fd 2...
	I1009 11:52:38.517581    2260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 11:52:38.517721    2260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
	I1009 11:52:38.519233    2260 out.go:352] Setting JSON to false
	I1009 11:52:38.537587    2260 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1328,"bootTime":1728498630,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1009 11:52:38.537680    2260 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1009 11:52:38.541267    2260 out.go:177] * [functional-517000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1009 11:52:38.549282    2260 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 11:52:38.549320    2260 notify.go:220] Checking for updates...
	I1009 11:52:38.556347    2260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	I1009 11:52:38.559313    2260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1009 11:52:38.562277    2260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 11:52:38.565327    2260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	I1009 11:52:38.568321    2260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 11:52:38.571675    2260 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1009 11:52:38.571960    2260 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 11:52:38.576694    2260 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1009 11:52:38.584326    2260 start.go:297] selected driver: qemu2
	I1009 11:52:38.584336    2260 start.go:901] validating driver "qemu2" against &{Name:functional-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 11:52:38.584408    2260 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 11:52:38.591318    2260 out.go:201] 
	W1009 11:52:38.595237    2260 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 11:52:38.599298    2260 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
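The French output above comes from running minikube under a French locale; the translation is chosen from the standard locale environment. A sketch, assuming LC_ALL is honored on this platform (profile name is a placeholder):

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p <profile> --dry-run --memory 250MB --driver=qemu2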

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
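status -f takes a Go template over the status struct; the fields exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig (the "kublet" label in the test's template is literal text, not a field name). A smaller example (profile name is a placeholder):

    out/minikube-darwin-arm64 -p <profile> status -f 'host:{{.Host}},apiserver:{{.APIServer}}'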

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (25.6s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ac845b3d-9cbd-47f6-b00b-1a147a94e1fc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010706292s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-517000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-517000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-517000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-517000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [253b33d5-24b7-4ae5-bfcd-d4efcb2dd404] Pending
helpers_test.go:344: "sp-pod" [253b33d5-24b7-4ae5-bfcd-d4efcb2dd404] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [253b33d5-24b7-4ae5-bfcd-d4efcb2dd404] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.015889625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-517000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-517000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-517000 delete -f testdata/storage-provisioner/pod.yaml: (1.041184916s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-517000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [27c51341-669e-4026-b47c-799bff69ad8d] Pending
helpers_test.go:344: "sp-pod" [27c51341-669e-4026-b47c-799bff69ad8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [27c51341-669e-4026-b47c-799bff69ad8d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011010083s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-517000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.60s)
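The interesting assertion here is persistence: write through the PVC, delete and recreate the pod, then confirm the file survived. In outline, using the suite's testdata manifests (context name is a placeholder):

    kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo
    kubectl --context <profile> delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> exec sp-pod -- ls /tmp/mount   # foo should still be listed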

TestFunctional/parallel/SSHCmd (0.18s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.18s)

TestFunctional/parallel/CpCmd (0.47s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh -n functional-517000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cp functional-517000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2993999350/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh -n functional-517000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh -n functional-517000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.47s)
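minikube cp copies in either direction; prefixing a path with a node name selects the guest side. The copies above reduce to (profile name is a placeholder):

    out/minikube-darwin-arm64 -p <profile> cp testdata/cp-test.txt /home/docker/cp-test.txt     # host -> node
    out/minikube-darwin-arm64 -p <profile> cp <profile>:/home/docker/cp-test.txt ./cp-test.txt  # node -> host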

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1686/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /etc/test/nested/copy/1686/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
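FileSync verifies that anything under the minikube home's files/ tree is copied into the node at the same relative path on start. Recreating the fixture by hand (path taken from the log; MINIKUBE_HOME defaults to ~/.minikube; profile name is a placeholder):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/1686
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/1686/hosts
    out/minikube-darwin-arm64 -p <profile> ssh "sudo cat /etc/test/nested/copy/1686/hosts"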

TestFunctional/parallel/CertSync (0.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1686.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /etc/ssl/certs/1686.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1686.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /usr/share/ca-certificates/1686.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /etc/ssl/certs/16862.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /usr/share/ca-certificates/16862.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
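CertSync covers the companion mechanism for CA certificates: files under the minikube home's certs/ directory are installed into the VM under both /etc/ssl/certs and /usr/share/ca-certificates, apparently alongside an OpenSSL subject-hash name (the 51391683.0-style entries above). Spot check (profile name is a placeholder):

    out/minikube-darwin-arm64 -p <profile> ssh "sudo cat /etc/ssl/certs/1686.pem"
    out/minikube-darwin-arm64 -p <profile> ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy of the same cert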

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-517000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh "sudo systemctl is-active crio": exit status 1 (136.718375ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
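The non-zero exit with "inactive" on stdout is expected: systemctl is-active exits 3 for an inactive unit, and ssh propagates that status. By hand (profile name is a placeholder):

    out/minikube-darwin-arm64 -p <profile> ssh "sudo systemctl is-active crio"; echo "exit: $?"   # inactive / 3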

TestFunctional/parallel/License (0.25s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-517000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-517000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-517000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-517000 image ls --format short --alsologtostderr:
I1009 11:52:44.973474    2321 out.go:345] Setting OutFile to fd 1 ...
I1009 11:52:44.973878    2321 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:44.973882    2321 out.go:358] Setting ErrFile to fd 2...
I1009 11:52:44.973885    2321 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:44.974035    2321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 11:52:44.974507    2321 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:44.974568    2321 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:44.975447    2321 ssh_runner.go:195] Run: systemctl --version
I1009 11:52:44.975456    2321 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
I1009 11:52:44.996559    2321 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
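Note: as the stderr shows, "image ls" is answered by running docker images --no-trunc --format "{{json .}}" inside the guest, which prints one JSON object per line. A reader-side sketch of consuming that stream in Go, assuming a local docker CLI rather than the test's SSH hop:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerImage declares just the {{json .}} fields used below.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	Size       string `json:"Size"`
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out) // one JSON object per line
	for sc.Scan() {
		var img dockerImage
		if json.Unmarshal(sc.Bytes(), &img) == nil {
			fmt.Printf("%s:%s\t%s\n", img.Repository, img.Tag, img.Size)
		}
	}
	_ = cmd.Wait()
}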

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-517000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | latest            | 048e090385966 | 197MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | 577a23b5858b9 | 50.8MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-517000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-517000 | 897b631308e57 | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-517000 image ls --format table --alsologtostderr:
I1009 11:52:45.124531    2325 out.go:345] Setting OutFile to fd 1 ...
I1009 11:52:45.124764    2325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.124772    2325 out.go:358] Setting ErrFile to fd 2...
I1009 11:52:45.124774    2325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.124916    2325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 11:52:45.125381    2325 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.125443    2325 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.126276    2325 ssh_runner.go:195] Run: systemctl --version
I1009 11:52:45.126285    2325 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
I1009 11:52:45.147311    2325 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-517000 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},
{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-517000"],"size":"4780000"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},
{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},
{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"897b631308e57a61fd5ef79f51225bd05659b639eeecf6e9030f8b892daa5c42","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-517000"],"size":"30"},
{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},
{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"50800000"},
{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},
{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},
{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-517000 image ls --format json --alsologtostderr:
I1009 11:52:45.049936    2323 out.go:345] Setting OutFile to fd 1 ...
I1009 11:52:45.050150    2323 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.050157    2323 out.go:358] Setting ErrFile to fd 2...
I1009 11:52:45.050160    2323 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.050311    2323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 11:52:45.050785    2323 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.050862    2323 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.051843    2323 ssh_runner.go:195] Run: systemctl --version
I1009 11:52:45.051851    2323 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
I1009 11:52:45.073008    2323 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
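Note: the JSON above is the same image record that the short/table/yaml formats render. A sketch of a struct that round-trips one of those objects (reader-side types, not minikube's own):

package main

import (
	"encoding/json"
	"fmt"
)

// listImage matches one object from "image ls --format json" in this run.
type listImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	raw := `[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"}]`
	var imgs []listImage
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags[0], img.Size)
	}
}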

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-517000 image ls --format yaml --alsologtostderr:
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 897b631308e57a61fd5ef79f51225bd05659b639eeecf6e9030f8b892daa5c42
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-517000
size: "30"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "50800000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-517000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-517000 image ls --format yaml --alsologtostderr:
I1009 11:52:45.197891    2327 out.go:345] Setting OutFile to fd 1 ...
I1009 11:52:45.198091    2327 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.198095    2327 out.go:358] Setting ErrFile to fd 2...
I1009 11:52:45.198098    2327 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.198281    2327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 11:52:45.198746    2327 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.198808    2327 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.199705    2327 ssh_runner.go:195] Run: systemctl --version
I1009 11:52:45.199714    2327 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
I1009 11:52:45.220897    2327 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh pgrep buildkitd: exit status 1 (60.740208ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image build -t localhost/my-image:functional-517000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-517000 image build -t localhost/my-image:functional-517000 testdata/build --alsologtostderr: (1.993456167s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-517000 image build -t localhost/my-image:functional-517000 testdata/build --alsologtostderr:
I1009 11:52:45.332293    2331 out.go:345] Setting OutFile to fd 1 ...
I1009 11:52:45.332580    2331 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.332586    2331 out.go:358] Setting ErrFile to fd 2...
I1009 11:52:45.332588    2331 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 11:52:45.332732    2331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19780-1164/.minikube/bin
I1009 11:52:45.333213    2331 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.333949    2331 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1009 11:52:45.334812    2331 ssh_runner.go:195] Run: systemctl --version
I1009 11:52:45.334820    2331 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19780-1164/.minikube/machines/functional-517000/id_rsa Username:docker}
I1009 11:52:45.356128    2331 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.293694695.tar
I1009 11:52:45.356206    2331 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 11:52:45.359944    2331 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.293694695.tar
I1009 11:52:45.361748    2331 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.293694695.tar: stat -c "%s %y" /var/lib/minikube/build/build.293694695.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.293694695.tar': No such file or directory
I1009 11:52:45.361764    2331 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.293694695.tar --> /var/lib/minikube/build/build.293694695.tar (3072 bytes)
I1009 11:52:45.372301    2331 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.293694695
I1009 11:52:45.376424    2331 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.293694695 -xf /var/lib/minikube/build/build.293694695.tar
I1009 11:52:45.380436    2331 docker.go:360] Building image: /var/lib/minikube/build/build.293694695
I1009 11:52:45.380492    2331 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-517000 /var/lib/minikube/build/build.293694695
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers done
#8 writing image sha256:17b2cd67846b49407bc44200e7a3c276b373f857686527d6c82ce403bc2cff70 done
#8 naming to localhost/my-image:functional-517000 done
#8 DONE 0.0s
I1009 11:52:47.268562    2331 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-517000 /var/lib/minikube/build/build.293694695: (1.888076459s)
I1009 11:52:47.268667    2331 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.293694695
I1009 11:52:47.272438    2331 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.293694695.tar
I1009 11:52:47.275871    2331 build_images.go:217] Built localhost/my-image:functional-517000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.293694695.tar
I1009 11:52:47.275890    2331 build_images.go:133] succeeded building to: functional-517000
I1009 11:52:47.275893    2331 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls
2024/10/09 11:52:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.13s)
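Note: the stderr above spells out the build flow: pack testdata/build into a tar on the host, scp it to /var/lib/minikube/build in the guest, untar, docker build, then clean up. A condensed sketch of that sequence; copyToGuest and runInGuest are hypothetical stand-ins for minikube's ssh_runner, here backed by plain scp/ssh against the VM address from the log (key-based auth assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const guest = "docker@192.168.105.4" // VM address from the log above

func copyToGuest(local, remote string) error {
	return exec.Command("scp", local, guest+":"+remote).Run()
}

func runInGuest(args ...string) error {
	return exec.Command("ssh", guest, strings.Join(args, " ")).Run()
}

// buildInGuest mirrors the logged steps: ship the build-context tar into
// the VM, unpack it, build the image there, and remove the scratch files.
func buildInGuest(localTar, tag string) error {
	const remoteTar = "/var/lib/minikube/build/ctx.tar"
	const remoteDir = "/var/lib/minikube/build/ctx"
	if err := copyToGuest(localTar, remoteTar); err != nil {
		return err
	}
	for _, step := range [][]string{
		{"sudo", "mkdir", "-p", remoteDir},
		{"sudo", "tar", "-C", remoteDir, "-xf", remoteTar},
		{"docker", "build", "-t", tag, remoteDir},
		{"sudo", "rm", "-rf", remoteDir},
		{"sudo", "rm", "-f", remoteTar},
	} {
		if err := runInGuest(step...); err != nil {
			return fmt.Errorf("%v: %w", step, err)
		}
	}
	return nil
}

func main() {
	if err := buildInGuest("build.tar", "localhost/my-image:sketch"); err != nil {
		fmt.Println(err)
	}
}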

TestFunctional/parallel/ImageCommands/Setup (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.709168416s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-517000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

TestFunctional/parallel/DockerEnv/bash (0.3s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-517000 docker-env) && out/minikube-darwin-arm64 status -p functional-517000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-517000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-517000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-517000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-5rqvj" [2afe3a72-b0b3-4d6f-a81a-8ccadcc5b83b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-5rqvj" [2afe3a72-b0b3-4d6f-a81a-8ccadcc5b83b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.011106333s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image load --daemon kicbase/echo-server:functional-517000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image load --daemon kicbase/echo-server:functional-517000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-517000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image load --daemon kicbase/echo-server:functional-517000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image save kicbase/echo-server:functional-517000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.17s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image rm kicbase/echo-server:functional-517000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-517000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 image save --daemon kicbase/echo-server:functional-517000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-517000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-517000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-517000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-517000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2138: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-517000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-517000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-517000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3b78d17b-3f81-4fb2-9efe-f2442fa525e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3b78d17b-3f81-4fb2-9efe-f2442fa525e6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005847416s
I1009 11:52:10.238881    1686 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

TestFunctional/parallel/ServiceCmd/List (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 service list -o json
functional_test.go:1494: Took "82.975375ms" to run "out/minikube-darwin-arm64 -p functional-517000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30561
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30561
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-517000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.161.56 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1009 11:52:10.323366    1686 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
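Note: the dig query goes straight at the cluster DNS service (10.96.0.10), which is only reachable from the host because the tunnel is routing service-CIDR traffic into the VM. The same probe in Go, as a sketch that pins a resolver to that address:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Pin all lookups to the cluster DNS service the tunnel exposes,
	// mirroring: dig @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}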

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1009 11:52:10.369699    1686 config.go:182] Loaded profile config "functional-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-517000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "96.832958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.815625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "98.686709ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.572083ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/parallel/MountCmd/any-port (6.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3362515138/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728499954959023000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3362515138/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728499954959023000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3362515138/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728499954959023000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3362515138/001/test-1728499954959023000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p": (1.491663167s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:52 test-1728499954959023000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh cat /mount-9p/test-1728499954959023000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-517000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [13c3b555-4808-498a-823e-c7a3b19d83de] Pending
helpers_test.go:344: "busybox-mount" [13c3b555-4808-498a-823e-c7a3b19d83de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [13c3b555-4808-498a-823e-c7a3b19d83de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [13c3b555-4808-498a-823e-c7a3b19d83de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005329167s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-517000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3362515138/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.09s)

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port553683268/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.303334ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 11:52:41.105028    1686 retry.go:31] will retry after 378.850015ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.386167ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 11:52:41.546451    1686 retry.go:31] will retry after 978.663657ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port553683268/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh "sudo umount -f /mount-9p": exit status 1 (64.472041ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-517000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port553683268/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
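Note: the retry.go lines above show the harness polling findmnt with a short randomized backoff until the 9p mount appears. A generic sketch of that poll-until-mounted loop, assuming only that a successful "findmnt -T" means the mount is live:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// waitForMount polls "findmnt -T <dir>" until it succeeds or the deadline
// passes, sleeping a short randomized interval between attempts, much like
// the retry.go lines in the log above.
func waitForMount(dir string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		if exec.Command("findmnt", "-T", dir).Run() == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("%s not mounted after %v", dir, deadline)
		}
		wait := 200*time.Millisecond + time.Duration(rand.Intn(800))*time.Millisecond
		fmt.Printf("will retry after %v\n", wait)
		time.Sleep(wait)
	}
}

func main() {
	if err := waitForMount("/mount-9p", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}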

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4194558153/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4194558153/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4194558153/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount1: exit status 1 (73.57375ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 11:52:43.031085    1686 retry.go:31] will retry after 351.674283ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount3: exit status 1 (63.966ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 11:52:43.582015    1686 retry.go:31] will retry after 734.041045ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-517000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-517000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4194558153/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4194558153/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-517000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4194558153/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
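The two retry.go:31 lines above show the polling pattern the mount checks rely on: re-run "findmnt -T" over ssh and wait a jittered, growing interval between attempts until the mount appears. A minimal sketch of that pattern in Go, assuming nothing about minikube's actual retry helper beyond what the log shows; retryUntil and its constants are illustrative, not minikube's API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with jittered exponential backoff until it
// succeeds or the time budget is spent (illustrative helper).
func retryUntil(budget time.Duration, check func() error) error {
	deadline := time.Now().Add(budget)
	wait := 100 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jittered backoff, like the 351ms and 734ms waits in the log.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	attempts := 0
	// Stand-in for the `findmnt -T /mount1` probe: fail twice, then pass.
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("exit status 1")
		}
		return nil
	})
}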

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-517000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-517000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-517000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/CopyFile (0.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-845000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.04s)

TestImageBuild/serial/Setup (34.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-692000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-692000 --driver=qemu2 : (34.824565s)
--- PASS: TestImageBuild/serial/Setup (34.82s)

TestImageBuild/serial/NormalBuild (1.54s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-692000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-692000: (1.539299292s)
--- PASS: TestImageBuild/serial/NormalBuild (1.54s)

TestImageBuild/serial/BuildWithBuildArg (0.73s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-692000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.73s)

TestImageBuild/serial/BuildWithDockerIgnore (0.45s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-692000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.45s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.47s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-692000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.47s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.17s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-945000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-945000 --output=json --user=testUser: (6.164840167s)
--- PASS: TestJSONOutput/stop/Command (6.17s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-834000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-834000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.761667ms)
-- stdout --
	{"specversion":"1.0","id":"7d626efa-1b27-42b4-afc1-394e520514e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-834000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"87a18cf0-adb5-47bc-b9b1-96923b9e34d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"67e106d4-b355-46be-abca-b475e2eab3f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig"}}
	{"specversion":"1.0","id":"3d68fb7b-1776-4fa7-b34c-ed2d73f9b773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2a79280f-6259-4291-afec-e0818403cfd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8f51bca1-b13f-4360-8035-b719cfd9d217","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube"}}
	{"specversion":"1.0","id":"581ed76c-e6c4-4f31-8413-1f6c18261fa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"95404fc6-5139-4998-a640-f075264debe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-834000
--- PASS: TestErrorJSONOutput (0.21s)
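Each line in the stdout block above is a CloudEvents-style JSON object, one per line, with the step, info, and error payloads carried in the "data" field. A small sketch of how such a stream could be consumed, assuming only the field names visible in the log (specversion, type, data, and the io.k8s.sigs.minikube.error event type); the decoder below is illustrative, not minikube's own code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent matches the field names visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. out/minikube-darwin-arm64 start ... --output=json | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not every output line need be JSON
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			// The DRV_UNSUPPORTED_OS event above carries these keys.
			fmt.Printf("error %s (%s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}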

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (69.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-333000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-333000 --driver=qemu2 : (34.674854917s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-334000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-334000 --driver=qemu2 : (34.501254459s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-333000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-334000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-334000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-334000
helpers_test.go:175: Cleaning up "first-333000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-333000
--- PASS: TestMinikubeProfile (69.84s)

TestStoppedBinaryUpgrade/Setup (2.29s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-220000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-206000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (108.914791ms)
-- stdout --
	* [NoKubernetes-206000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19780-1164/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19780-1164/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
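The test above exercises minikube's usage validation: --kubernetes-version combined with --no-kubernetes exits with status 14 (MK_USAGE) before any VM work starts. A hedged sketch of that kind of mutually exclusive flag check; the flag set and exit code below simply mirror the log, nothing here is minikube's actual implementation:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()
	// Reject the combination up front, as the MK_USAGE error above does.
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // mirrors the exit status in the log
	}
	fmt.Println("flags ok")
}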

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-206000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-206000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.749542ms)
-- stdout --
	* The control-plane node NoKubernetes-206000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-206000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.11s)

TestNoKubernetes/serial/Stop (1.88s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-206000
W1009 12:51:47.733962    1686 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1009 12:51:47.734181    1686 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1009 12:51:47.734215    1686 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit
I1009 12:51:48.226104    1686 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0 0x104e7e3c0] Decompressors:map[bz2:0x1400091ce20 gz:0x1400091ce28 tar:0x1400091cdd0 tar.bz2:0x1400091cde0 tar.gz:0x1400091cdf0 tar.xz:0x1400091ce00 tar.zst:0x1400091ce10 tbz2:0x1400091cde0 tgz:0x1400091cdf0 txz:0x1400091ce00 tzst:0x1400091ce10 xz:0x1400091ce30 zip:0x1400091ce40 zst:0x1400091ce38] Getters:map[file:0x140006c2970 http:0x140008c8a50 https:0x140008c8aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1009 12:51:48.226238    1686 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3034884057/001/docker-machine-driver-hyperkit
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-206000: (1.884396458s)
--- PASS: TestNoKubernetes/serial/Stop (1.88s)
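The interleaved download.go lines above record a fallback: the arch-specific docker-machine-driver-hyperkit-arm64 asset 404s on its checksum file, so the code retries the unsuffixed common asset. A sketch of that try-specific-then-generic pattern, assuming nothing beyond the URLs in the log; firstReachable is a hypothetical helper (a real downloader would also verify the checksum, as the getter error above shows):

package main

import (
	"fmt"
	"net/http"
)

// firstReachable probes candidate URLs in order and returns the first
// one that answers 200 (hypothetical helper).
func firstReachable(urls ...string) (string, error) {
	for _, u := range urls {
		resp, err := http.Head(u)
		if resp != nil {
			resp.Body.Close()
		}
		if err == nil && resp.StatusCode == http.StatusOK {
			return u, nil
		}
		fmt.Println("asset unavailable, trying next candidate:", u)
	}
	return "", fmt.Errorf("no release asset found")
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	// Arch-specific asset first, then the common one, as in the log.
	if u, err := firstReachable(base+"-arm64", base); err == nil {
		fmt.Println("would download:", u)
	}
}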

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-206000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-206000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.727625ms)
-- stdout --
	* The control-plane node NoKubernetes-206000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-206000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-462000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-462000 --alsologtostderr -v=3: (1.886863s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.89s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-462000 -n old-k8s-version-462000: exit status 7 (60.94ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-462000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
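The "status error: exit status 7 (may be ok)" line above encodes the convention these EnableAddonAfterStop steps rely on: minikube status exits non-zero when the host is stopped, the test treats 7 as expected, and the addon is enabled anyway. A sketch of driving that from Go via os/exec; the profile name is taken from the log, and treating 7 as "stopped, still fine" is an assumption based on the "(may be ok)" note:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-462000" // profile name from the log
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	fmt.Printf("host=%q exit=%d\n", out, code)
	// Assumption from the "(may be ok)" note: a stopped host is an
	// acceptable state before enabling the dashboard addon.
	if code == 0 || code == 7 {
		_ = exec.Command("out/minikube-darwin-arm64", "addons", "enable",
			"dashboard", "-p", profile).Run()
	}
}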

TestStartStop/group/no-preload/serial/Stop (3.49s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-089000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-089000 --alsologtostderr -v=3: (3.4929655s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-089000 -n no-preload-089000: exit status 7 (63.247541ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-089000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (1.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-266000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-266000 --alsologtostderr -v=3: (1.84879225s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.85s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-266000 -n embed-certs-266000: exit status 7 (58.605166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-266000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-367000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-367000 --alsologtostderr -v=3: (3.631616125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.63s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-851000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-851000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-851000 --alsologtostderr -v=3: (3.576589292s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.58s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (58.194042ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-367000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-851000 -n newest-cni-851000: exit status 7 (59.99225ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-851000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/257)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-311000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-311000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-311000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: /etc/hosts:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: /etc/resolv.conf:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-311000
>>> host: crictl pods:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: crictl containers:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> k8s: describe netcat deployment:
error: context "cilium-311000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-311000" does not exist
>>> k8s: netcat logs:
error: context "cilium-311000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-311000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-311000" does not exist
>>> k8s: coredns logs:
error: context "cilium-311000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-311000" does not exist
>>> k8s: api server logs:
error: context "cilium-311000" does not exist
>>> host: /etc/cni:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: ip a s:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: ip r s:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: iptables-save:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: iptables table nat:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-311000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-311000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-311000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-311000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-311000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-311000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-311000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-311000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-311000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-311000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-311000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: kubelet daemon config:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> k8s: kubelet logs:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-311000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-311000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-311000"

                                                
                                                
----------------------- debugLogs end: cilium-311000 [took: 2.330900333s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-311000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-311000
--- SKIP: TestNetworkPlugins/group/cilium (2.45s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-292000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
