Test Report: QEMU_macOS 19312

Commit 759e2b673c985a1fcc212824ad6ad48c6b3dc495 (2024-07-31, build 35593)

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.68
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.03
55 TestCertOptions 10.11
56 TestCertExpiration 195.26
57 TestDockerFlags 10.15
58 TestForceSystemdFlag 10.01
59 TestForceSystemdEnv 11.38
104 TestFunctional/parallel/ServiceCmdConnect 32.27
176 TestMultiControlPlane/serial/StopSecondaryNode 214.11
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 33.05
178 TestMultiControlPlane/serial/RestartSecondaryNode 209.03
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.44
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.03
183 TestMultiControlPlane/serial/StopCluster 202.08
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 9.99
193 TestJSONOutput/start/Command 10.02
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.1
225 TestMountStart/serial/StartWithMountFirst 10.09
228 TestMultiNode/serial/FreshStart2Nodes 9.88
229 TestMultiNode/serial/DeployApp2Nodes 115.85
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 49.52
237 TestMultiNode/serial/RestartKeepsNodes 8.8
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.44
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20.04
245 TestPreload 9.96
247 TestScheduledStopUnix 10.02
248 TestSkaffold 12.31
251 TestRunningBinaryUpgrade 600.65
253 TestKubernetesUpgrade 17.11
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.78
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.2
269 TestStoppedBinaryUpgrade/Upgrade 587.11
271 TestPause/serial/Start 10.22
281 TestNoKubernetes/serial/StartWithK8s 9.96
282 TestNoKubernetes/serial/StartWithStopK8s 5.32
283 TestNoKubernetes/serial/Start 5.28
287 TestNoKubernetes/serial/StartNoArgs 5.32
289 TestNetworkPlugins/group/auto/Start 9.75
290 TestNetworkPlugins/group/calico/Start 9.77
291 TestNetworkPlugins/group/custom-flannel/Start 9.91
292 TestNetworkPlugins/group/false/Start 9.83
293 TestNetworkPlugins/group/kindnet/Start 9.81
294 TestNetworkPlugins/group/flannel/Start 9.69
295 TestNetworkPlugins/group/enable-default-cni/Start 9.91
296 TestNetworkPlugins/group/bridge/Start 9.99
297 TestNetworkPlugins/group/kubenet/Start 9.89
299 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 9.95
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/no-preload/serial/SecondStart 5.22
318 TestStartStop/group/embed-certs/serial/FirstStart 10.03
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
322 TestStartStop/group/no-preload/serial/Pause 0.13
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.66
325 TestStartStop/group/embed-certs/serial/DeployApp 0.1
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
332 TestStartStop/group/embed-certs/serial/SecondStart 5.25
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.87
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/embed-certs/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/FirstStart 10.3
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.24
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (14.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-010000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-010000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.677875709s)

-- stdout --
	{"specversion":"1.0","id":"ba28d24b-3e9f-458c-a22e-e0e3cd9da7f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-010000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"197ebe73-05df-41ee-aa66-3f2b2c8b8746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"532d1fd1-956a-456b-ba6a-f8aebb2ef969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig"}}
	{"specversion":"1.0","id":"cac1c55b-24bc-496c-8be9-00711dc306b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"807aab0c-4864-462e-96ab-ed554e109f5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38ef0906-7422-47eb-85d9-132456450959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube"}}
	{"specversion":"1.0","id":"93579089-2e7e-4b95-b05d-bd1789c93e52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a74603e1-2800-416d-bb44-5f946843f4c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0258a192-5e6e-413c-94d1-a2835d60e3a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9564897c-18e6-4498-8c48-3435a410c2de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"029704bf-4f41-4c12-adb0-ecd974274794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-010000\" primary control-plane node in \"download-only-010000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"74526c7d-c500-469d-98ea-369765bbbe42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"917b233d-07f4-4c41-833a-df04cc934c97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0] Decompressors:map[bz2:0x14000813800 gz:0x14000813808 tar:0x140008137b0 tar.bz2:0x140008137c0 tar.gz:0x140008137d0 tar.xz:0x140008137e0 tar.zst:0x140008137f0 tbz2:0x140008137c0 tgz:0x14
0008137d0 txz:0x140008137e0 tzst:0x140008137f0 xz:0x14000813810 zip:0x14000813820 zst:0x14000813818] Getters:map[file:0x14000816740 http:0x140009c4500 https:0x140009c45a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b0dcd1bc-5d3b-41e4-98e9-0bf9fcdba7ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0731 14:26:08.650089    1915 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:26:08.650269    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:08.650275    1915 out.go:304] Setting ErrFile to fd 2...
	I0731 14:26:08.650278    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:08.650403    1915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	W0731 14:26:08.650541    1915 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1411/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1411/.minikube/config/config.json: no such file or directory
	I0731 14:26:08.651861    1915 out.go:298] Setting JSON to true
	I0731 14:26:08.669389    1915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1532,"bootTime":1722459636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:26:08.669472    1915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:26:08.673851    1915 out.go:97] [download-only-010000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:26:08.673984    1915 notify.go:220] Checking for updates...
	W0731 14:26:08.674010    1915 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 14:26:08.677751    1915 out.go:169] MINIKUBE_LOCATION=19312
	I0731 14:26:08.680778    1915 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:26:08.687891    1915 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:26:08.691823    1915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:26:08.694777    1915 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	W0731 14:26:08.702722    1915 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:26:08.702909    1915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:26:08.706854    1915 out.go:97] Using the qemu2 driver based on user configuration
	I0731 14:26:08.706877    1915 start.go:297] selected driver: qemu2
	I0731 14:26:08.706883    1915 start.go:901] validating driver "qemu2" against <nil>
	I0731 14:26:08.706969    1915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:26:08.709789    1915 out.go:169] Automatically selected the socket_vmnet network
	I0731 14:26:08.715580    1915 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 14:26:08.715710    1915 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:26:08.715758    1915 cni.go:84] Creating CNI manager for ""
	I0731 14:26:08.715776    1915 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 14:26:08.715821    1915 start.go:340] cluster config:
	{Name:download-only-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-010000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:26:08.721527    1915 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:26:08.724830    1915 out.go:97] Downloading VM boot image ...
	I0731 14:26:08.724855    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 14:26:15.086061    1915 out.go:97] Starting "download-only-010000" primary control-plane node in "download-only-010000" cluster
	I0731 14:26:15.086087    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:26:15.150445    1915 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 14:26:15.150458    1915 cache.go:56] Caching tarball of preloaded images
	I0731 14:26:15.150644    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:26:15.157783    1915 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 14:26:15.157791    1915 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:15.236048    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 14:26:22.197669    1915 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:22.197856    1915 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:22.893289    1915 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 14:26:22.893476    1915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-010000/config.json ...
	I0731 14:26:22.893492    1915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-010000/config.json: {Name:mk96c76876e8a3ab2d7cc57c5d91f2c6bf7fab17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:26:22.893707    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:26:22.893890    1915 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 14:26:23.263094    1915 out.go:169] 
	W0731 14:26:23.268107    1915 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0] Decompressors:map[bz2:0x14000813800 gz:0x14000813808 tar:0x140008137b0 tar.bz2:0x140008137c0 tar.gz:0x140008137d0 tar.xz:0x140008137e0 tar.zst:0x140008137f0 tbz2:0x140008137c0 tgz:0x140008137d0 txz:0x140008137e0 tzst:0x140008137f0 xz:0x14000813810 zip:0x14000813820 zst:0x14000813818] Getters:map[file:0x14000816740 http:0x140009c4500 https:0x140009c45a0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 14:26:23.268148    1915 out_reason.go:110] 
	W0731 14:26:23.274014    1915 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 14:26:23.277968    1915 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-010000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.68s)
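
The root cause is visible in the error payload above: the checksum file https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, most likely because upstream never published darwin/arm64 kubectl binaries for v1.20.0. A minimal standalone Go sketch (not minikube code; it only assumes network access from the agent) that reproduces the 404 outside the test harness:

	// failing_url_check.go: probe the same checksum URL the downloader used.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// URL taken verbatim from the failure message above.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		// Expected: "404 Not Found", matching the "bad response code: 404"
		// in the log, which is why minikube exits with status 40.
		fmt.Println(resp.Status)
	}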

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
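
This sub-test fails as a direct consequence of the previous one: the assertion at aaa_download_only_test.go:175 is essentially a stat of the cached binary, which the failed download above never wrote. A hypothetical standalone equivalent of that check (path copied from the log; adjust for your environment):

	// stat_check.go: existence check for the cached kubectl binary.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// This is the exact error the test reports: the earlier download
			// failed, so the cached binary was never written.
			fmt.Println("stat error:", err)
			return
		}
		fmt.Println("kubectl binary is cached")
	}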

TestOffline (10.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-254000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-254000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.88186625s)

-- stdout --
	* [offline-docker-254000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-254000" primary control-plane node in "offline-docker-254000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:03:35.296206    4497 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:03:35.296370    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:35.296374    4497 out.go:304] Setting ErrFile to fd 2...
	I0731 15:03:35.296377    4497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:35.296505    4497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:03:35.297609    4497 out.go:298] Setting JSON to false
	I0731 15:03:35.315253    4497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3779,"bootTime":1722459636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:03:35.315330    4497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:03:35.317907    4497 out.go:177] * [offline-docker-254000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:03:35.326244    4497 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:03:35.326283    4497 notify.go:220] Checking for updates...
	I0731 15:03:35.333174    4497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:03:35.336344    4497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:03:35.339152    4497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:03:35.342162    4497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:03:35.345220    4497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:03:35.346749    4497 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:35.346805    4497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:03:35.350156    4497 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:03:35.357091    4497 start.go:297] selected driver: qemu2
	I0731 15:03:35.357100    4497 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:03:35.357108    4497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:03:35.359001    4497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:03:35.362147    4497 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:03:35.365364    4497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:03:35.365395    4497 cni.go:84] Creating CNI manager for ""
	I0731 15:03:35.365402    4497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:03:35.365405    4497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:03:35.365445    4497 start.go:340] cluster config:
	{Name:offline-docker-254000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:03:35.369115    4497 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:35.376184    4497 out.go:177] * Starting "offline-docker-254000" primary control-plane node in "offline-docker-254000" cluster
	I0731 15:03:35.380195    4497 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:03:35.380219    4497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:03:35.380232    4497 cache.go:56] Caching tarball of preloaded images
	I0731 15:03:35.380315    4497 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:03:35.380322    4497 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:03:35.380381    4497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/offline-docker-254000/config.json ...
	I0731 15:03:35.380392    4497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/offline-docker-254000/config.json: {Name:mkc2a8b8cb91d2db354d94a55440969411911bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:03:35.380676    4497 start.go:360] acquireMachinesLock for offline-docker-254000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:35.380719    4497 start.go:364] duration metric: took 33.5µs to acquireMachinesLock for "offline-docker-254000"
	I0731 15:03:35.380732    4497 start.go:93] Provisioning new machine with config: &{Name:offline-docker-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:35.380768    4497 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:35.385205    4497 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:35.400997    4497 start.go:159] libmachine.API.Create for "offline-docker-254000" (driver="qemu2")
	I0731 15:03:35.401037    4497 client.go:168] LocalClient.Create starting
	I0731 15:03:35.401116    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:35.401147    4497 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:35.401156    4497 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:35.401196    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:35.401218    4497 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:35.401230    4497 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:35.401588    4497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:35.553948    4497 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:35.740158    4497 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:35.740169    4497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:35.740400    4497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2
	I0731 15:03:35.750063    4497 main.go:141] libmachine: STDOUT: 
	I0731 15:03:35.750084    4497 main.go:141] libmachine: STDERR: 
	I0731 15:03:35.750137    4497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2 +20000M
	I0731 15:03:35.762101    4497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:35.762119    4497 main.go:141] libmachine: STDERR: 
	I0731 15:03:35.762143    4497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2
	I0731 15:03:35.762149    4497 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:35.762160    4497 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:35.762188    4497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:da:d9:10:d8:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2
	I0731 15:03:35.763862    4497 main.go:141] libmachine: STDOUT: 
	I0731 15:03:35.763876    4497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:35.763894    4497 client.go:171] duration metric: took 362.858ms to LocalClient.Create
	I0731 15:03:37.764322    4497 start.go:128] duration metric: took 2.383587333s to createHost
	I0731 15:03:37.764353    4497 start.go:83] releasing machines lock for "offline-docker-254000", held for 2.383672167s
	W0731 15:03:37.764373    4497 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:37.769387    4497 out.go:177] * Deleting "offline-docker-254000" in qemu2 ...
	W0731 15:03:37.782760    4497 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:37.782772    4497 start.go:729] Will try again in 5 seconds ...
	I0731 15:03:42.784918    4497 start.go:360] acquireMachinesLock for offline-docker-254000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:42.785383    4497 start.go:364] duration metric: took 344.083µs to acquireMachinesLock for "offline-docker-254000"
	I0731 15:03:42.785527    4497 start.go:93] Provisioning new machine with config: &{Name:offline-docker-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:42.785857    4497 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:42.794341    4497 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:42.844402    4497 start.go:159] libmachine.API.Create for "offline-docker-254000" (driver="qemu2")
	I0731 15:03:42.844458    4497 client.go:168] LocalClient.Create starting
	I0731 15:03:42.844567    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:42.844630    4497 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:42.844674    4497 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:42.844746    4497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:42.844790    4497 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:42.844808    4497 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:42.845328    4497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:43.006048    4497 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:43.077635    4497 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:43.077644    4497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:43.077848    4497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2
	I0731 15:03:43.087231    4497 main.go:141] libmachine: STDOUT: 
	I0731 15:03:43.087254    4497 main.go:141] libmachine: STDERR: 
	I0731 15:03:43.087314    4497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2 +20000M
	I0731 15:03:43.095132    4497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:43.095149    4497 main.go:141] libmachine: STDERR: 
	I0731 15:03:43.095167    4497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2
	I0731 15:03:43.095173    4497 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:43.095183    4497 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:43.095209    4497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:f8:75:57:0e:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/offline-docker-254000/disk.qcow2
	I0731 15:03:43.096804    4497 main.go:141] libmachine: STDOUT: 
	I0731 15:03:43.096818    4497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:43.096830    4497 client.go:171] duration metric: took 252.369917ms to LocalClient.Create
	I0731 15:03:45.099006    4497 start.go:128] duration metric: took 2.313157792s to createHost
	I0731 15:03:45.099153    4497 start.go:83] releasing machines lock for "offline-docker-254000", held for 2.313736458s
	W0731 15:03:45.099520    4497 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:45.118163    4497 out.go:177] 
	W0731 15:03:45.122017    4497 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:03:45.122041    4497 out.go:239] * 
	* 
	W0731 15:03:45.124512    4497 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:03:45.135049    4497 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-254000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-31 15:03:45.150697 -0700 PDT m=+2256.595384459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-254000 -n offline-docker-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-254000 -n offline-docker-254000: exit status 7 (66.760458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-254000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-254000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-254000
--- FAIL: TestOffline (10.03s)
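
TestOffline, like nearly every qemu2 start in this report, dies before the VM exists: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 command in the libmachine lines above never gets a network file descriptor. A small diagnostic sketch (assumed tooling, not part of the suite) that distinguishes a stopped socket_vmnet daemon from other failures:

	// vmnet_probe.go: check whether socket_vmnet is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the ERROR lines above and means
			// the daemon is down; if it was installed via Homebrew (an assumption),
			// `sudo brew services start socket_vmnet` would restart it.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}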

TestCertOptions (10.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-991000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-991000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.842994709s)

-- stdout --
	* [cert-options-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-991000" primary control-plane node in "cert-options-991000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-991000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-991000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-991000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.224291ms)

-- stdout --
	* The control-plane node cert-options-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-991000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-991000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-991000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-991000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-991000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (38.289166ms)

-- stdout --
	* The control-plane node cert-options-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-991000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-991000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-991000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-31 15:04:16.81462 -0700 PDT m=+2288.259888167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-991000 -n cert-options-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-991000 -n cert-options-991000: exit status 7 (28.779583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-991000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-991000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-991000
--- FAIL: TestCertOptions (10.11s)
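Every qemu2 failure in this report shares one root cause: nothing was listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client could not hand QEMU a vmnet connection. A quick host-side triage sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (paths and service names may differ on other setups):

    # is the helper running, and does its socket exist?
    $ pgrep -fl socket_vmnet
    $ ls -l /var/run/socket_vmnet

    # restart the Homebrew-managed service (it runs as root)
    $ sudo brew services restart socket_vmnet

Until that socket accepts connections, every start that selects the socket_vmnet network will fail with the same "Connection refused".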

TestCertExpiration (195.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-885000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-885000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.87218525s)

-- stdout --
	* [cert-expiration-885000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-885000" primary control-plane node in "cert-expiration-885000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-885000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-885000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-885000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-885000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-885000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.23755325s)

-- stdout --
	* [cert-expiration-885000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-885000" primary control-plane node in "cert-expiration-885000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-885000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-885000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-885000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-885000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-885000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-885000" primary control-plane node in "cert-expiration-885000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-885000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-885000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-885000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-31 15:07:16.834185 -0700 PDT m=+2468.275560876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-885000 -n cert-expiration-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-885000 -n cert-expiration-885000: exit status 7 (68.161792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-885000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-885000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-885000
--- FAIL: TestCertExpiration (195.26s)
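Note the 195 s wall time against two start attempts that each failed within seconds: the remainder is the test's fixed ~3-minute wait for the --cert-expiration=3m certificates to expire before the second start (compare the 15:04 and 15:07 timestamps). Because both starts failed at the socket_vmnet layer, certificate rotation was never actually exercised. For reference, on a running cluster the expiry being manipulated can be read directly; a minimal sketch (standard openssl):

    $ out/minikube-darwin-arm64 -p cert-expiration-885000 ssh \
        "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"

which should report a notAfter roughly three minutes after the first start.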

TestDockerFlags (10.15s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-700000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-700000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.911525917s)

-- stdout --
	* [docker-flags-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-700000" primary control-plane node in "docker-flags-700000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-700000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:03:56.696956    4697 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:03:56.697087    4697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:56.697091    4697 out.go:304] Setting ErrFile to fd 2...
	I0731 15:03:56.697093    4697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:56.697223    4697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:03:56.698306    4697 out.go:298] Setting JSON to false
	I0731 15:03:56.714358    4697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3800,"bootTime":1722459636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:03:56.714428    4697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:03:56.720716    4697 out.go:177] * [docker-flags-700000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:03:56.728890    4697 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:03:56.728955    4697 notify.go:220] Checking for updates...
	I0731 15:03:56.735790    4697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:03:56.741719    4697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:03:56.744798    4697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:03:56.747855    4697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:03:56.749303    4697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:03:56.753102    4697 config.go:182] Loaded profile config "force-systemd-flag-762000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:56.753175    4697 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:56.753234    4697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:03:56.757812    4697 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:03:56.762823    4697 start.go:297] selected driver: qemu2
	I0731 15:03:56.762831    4697 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:03:56.762839    4697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:03:56.765197    4697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:03:56.767820    4697 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:03:56.770911    4697 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 15:03:56.770927    4697 cni.go:84] Creating CNI manager for ""
	I0731 15:03:56.770934    4697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:03:56.770937    4697 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:03:56.770964    4697 start.go:340] cluster config:
	{Name:docker-flags-700000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:03:56.774643    4697 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:56.782840    4697 out.go:177] * Starting "docker-flags-700000" primary control-plane node in "docker-flags-700000" cluster
	I0731 15:03:56.786723    4697 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:03:56.786739    4697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:03:56.786755    4697 cache.go:56] Caching tarball of preloaded images
	I0731 15:03:56.786816    4697 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:03:56.786823    4697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:03:56.786895    4697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/docker-flags-700000/config.json ...
	I0731 15:03:56.786906    4697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/docker-flags-700000/config.json: {Name:mkd201fd2106510aeac05d6c7b232d631b0206b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:03:56.787119    4697 start.go:360] acquireMachinesLock for docker-flags-700000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:56.787155    4697 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "docker-flags-700000"
	I0731 15:03:56.787169    4697 start.go:93] Provisioning new machine with config: &{Name:docker-flags-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:56.787198    4697 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:56.794795    4697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:56.811776    4697 start.go:159] libmachine.API.Create for "docker-flags-700000" (driver="qemu2")
	I0731 15:03:56.811804    4697 client.go:168] LocalClient.Create starting
	I0731 15:03:56.811861    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:56.811893    4697 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:56.811901    4697 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:56.811938    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:56.811960    4697 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:56.811967    4697 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:56.812355    4697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:56.966196    4697 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:57.118942    4697 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:57.118953    4697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:57.119151    4697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2
	I0731 15:03:57.128811    4697 main.go:141] libmachine: STDOUT: 
	I0731 15:03:57.128831    4697 main.go:141] libmachine: STDERR: 
	I0731 15:03:57.128892    4697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2 +20000M
	I0731 15:03:57.136891    4697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:57.136906    4697 main.go:141] libmachine: STDERR: 
	I0731 15:03:57.136926    4697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2
	I0731 15:03:57.136931    4697 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:57.136942    4697 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:57.136965    4697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4a:2d:be:71:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2
	I0731 15:03:57.138610    4697 main.go:141] libmachine: STDOUT: 
	I0731 15:03:57.138625    4697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:57.138641    4697 client.go:171] duration metric: took 326.8375ms to LocalClient.Create
	I0731 15:03:59.140766    4697 start.go:128] duration metric: took 2.3535945s to createHost
	I0731 15:03:59.140820    4697 start.go:83] releasing machines lock for "docker-flags-700000", held for 2.353698375s
	W0731 15:03:59.140946    4697 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:59.163945    4697 out.go:177] * Deleting "docker-flags-700000" in qemu2 ...
	W0731 15:03:59.186506    4697 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:59.186523    4697 start.go:729] Will try again in 5 seconds ...
	I0731 15:04:04.188662    4697 start.go:360] acquireMachinesLock for docker-flags-700000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:04:04.189000    4697 start.go:364] duration metric: took 256.667µs to acquireMachinesLock for "docker-flags-700000"
	I0731 15:04:04.189068    4697 start.go:93] Provisioning new machine with config: &{Name:docker-flags-700000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-700000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:04:04.189304    4697 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:04:04.197829    4697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:04:04.238727    4697 start.go:159] libmachine.API.Create for "docker-flags-700000" (driver="qemu2")
	I0731 15:04:04.238786    4697 client.go:168] LocalClient.Create starting
	I0731 15:04:04.238899    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:04:04.238957    4697 main.go:141] libmachine: Decoding PEM data...
	I0731 15:04:04.238972    4697 main.go:141] libmachine: Parsing certificate...
	I0731 15:04:04.239028    4697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:04:04.239069    4697 main.go:141] libmachine: Decoding PEM data...
	I0731 15:04:04.239079    4697 main.go:141] libmachine: Parsing certificate...
	I0731 15:04:04.239658    4697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:04:04.400470    4697 main.go:141] libmachine: Creating SSH key...
	I0731 15:04:04.508852    4697 main.go:141] libmachine: Creating Disk image...
	I0731 15:04:04.508857    4697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:04:04.509047    4697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2
	I0731 15:04:04.518350    4697 main.go:141] libmachine: STDOUT: 
	I0731 15:04:04.518374    4697 main.go:141] libmachine: STDERR: 
	I0731 15:04:04.518444    4697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2 +20000M
	I0731 15:04:04.526261    4697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:04:04.526282    4697 main.go:141] libmachine: STDERR: 
	I0731 15:04:04.526294    4697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2
	I0731 15:04:04.526298    4697 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:04:04.526306    4697 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:04:04.526338    4697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5c:c4:56:f7:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/docker-flags-700000/disk.qcow2
	I0731 15:04:04.527963    4697 main.go:141] libmachine: STDOUT: 
	I0731 15:04:04.527978    4697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:04:04.527990    4697 client.go:171] duration metric: took 289.204042ms to LocalClient.Create
	I0731 15:04:06.530126    4697 start.go:128] duration metric: took 2.340833584s to createHost
	I0731 15:04:06.530170    4697 start.go:83] releasing machines lock for "docker-flags-700000", held for 2.3411925s
	W0731 15:04:06.530537    4697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-700000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:04:06.545284    4697 out.go:177] 
	W0731 15:04:06.556407    4697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:04:06.556458    4697 out.go:239] * 
	* 
	W0731 15:04:06.559152    4697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:04:06.566134    4697 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-700000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
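The stderr trace above also shows the mechanism that fails: libmachine does not execute qemu-system-aarch64 directly but wraps it in socket_vmnet_client, which connects to /var/run/socket_vmnet and passes that connection to QEMU as an inherited descriptor (-netdev socket,id=net0,fd=3). Stripped of profile-specific paths, the invocation has this shape:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
        qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf ... \
        -device virtio-net-pci,netdev=net0,mac=... \
        -netdev socket,id=net0,fd=3 -daemonize .../disk.qcow2

When the connect to the socket is refused, the client exits before QEMU ever launches, which is the "exit status 1" wrapped inside every "creating host: create: creating" error.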
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-700000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-700000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.527875ms)

-- stdout --
	* The control-plane node docker-flags-700000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-700000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-700000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-700000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-700000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-700000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-700000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-700000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-700000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.807042ms)

-- stdout --
	* The control-plane node docker-flags-700000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-700000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-700000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-700000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-700000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-700000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-31 15:04:06.71108 -0700 PDT m=+2278.156162292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-700000 -n docker-flags-700000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-700000 -n docker-flags-700000: exit status 7 (28.291584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-700000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-700000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-700000
--- FAIL: TestDockerFlags (10.15s)
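The four assertion failures above are all secondary: with the host stopped, the ssh subcommand can only print the "host is not running" hint, which naturally contains neither the env pairs nor the daemon flags. What the test would verify on a healthy cluster is that --docker-env and --docker-opt reach the dockerd systemd unit; roughly (expected shape under those assumptions, not captured output):

    $ out/minikube-darwin-arm64 -p docker-flags-700000 ssh \
        "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT

    $ out/minikube-darwin-arm64 -p docker-flags-700000 ssh \
        "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }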

TestForceSystemdFlag (10.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-762000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-762000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.810447375s)

-- stdout --
	* [force-systemd-flag-762000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-762000" primary control-plane node in "force-systemd-flag-762000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-762000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:03:51.736730    4670 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:03:51.736855    4670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:51.736858    4670 out.go:304] Setting ErrFile to fd 2...
	I0731 15:03:51.736861    4670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:51.736996    4670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:03:51.738002    4670 out.go:298] Setting JSON to false
	I0731 15:03:51.753984    4670 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3795,"bootTime":1722459636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:03:51.754055    4670 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:03:51.759935    4670 out.go:177] * [force-systemd-flag-762000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:03:51.766915    4670 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:03:51.766999    4670 notify.go:220] Checking for updates...
	I0731 15:03:51.774871    4670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:03:51.777875    4670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:03:51.780928    4670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:03:51.782220    4670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:03:51.784909    4670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:03:51.788280    4670 config.go:182] Loaded profile config "force-systemd-env-397000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:51.788357    4670 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:51.788409    4670 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:03:51.792819    4670 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:03:51.799930    4670 start.go:297] selected driver: qemu2
	I0731 15:03:51.799935    4670 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:03:51.799941    4670 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:03:51.802186    4670 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:03:51.804935    4670 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:03:51.808013    4670 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 15:03:51.808040    4670 cni.go:84] Creating CNI manager for ""
	I0731 15:03:51.808049    4670 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:03:51.808056    4670 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:03:51.808088    4670 start.go:340] cluster config:
	{Name:force-systemd-flag-762000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:03:51.811728    4670 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:51.817879    4670 out.go:177] * Starting "force-systemd-flag-762000" primary control-plane node in "force-systemd-flag-762000" cluster
	I0731 15:03:51.821901    4670 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:03:51.821917    4670 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:03:51.821930    4670 cache.go:56] Caching tarball of preloaded images
	I0731 15:03:51.822009    4670 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:03:51.822017    4670 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:03:51.822090    4670 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/force-systemd-flag-762000/config.json ...
	I0731 15:03:51.822101    4670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/force-systemd-flag-762000/config.json: {Name:mkb84a4248c77e230628c8954b31b1e767dbcd71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:03:51.822313    4670 start.go:360] acquireMachinesLock for force-systemd-flag-762000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:51.822348    4670 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "force-systemd-flag-762000"
	I0731 15:03:51.822361    4670 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-762000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:51.822389    4670 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:51.829927    4670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:51.847135    4670 start.go:159] libmachine.API.Create for "force-systemd-flag-762000" (driver="qemu2")
	I0731 15:03:51.847160    4670 client.go:168] LocalClient.Create starting
	I0731 15:03:51.847215    4670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:51.847244    4670 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:51.847252    4670 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:51.847288    4670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:51.847310    4670 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:51.847319    4670 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:51.847705    4670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:51.997794    4670 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:52.043452    4670 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:52.043457    4670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:52.043620    4670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2
	I0731 15:03:52.052594    4670 main.go:141] libmachine: STDOUT: 
	I0731 15:03:52.052612    4670 main.go:141] libmachine: STDERR: 
	I0731 15:03:52.052667    4670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2 +20000M
	I0731 15:03:52.060322    4670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:52.060337    4670 main.go:141] libmachine: STDERR: 
	I0731 15:03:52.060358    4670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2
	I0731 15:03:52.060366    4670 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:52.060378    4670 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:52.060406    4670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:94:12:8d:28:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2
	I0731 15:03:52.061898    4670 main.go:141] libmachine: STDOUT: 
	I0731 15:03:52.061913    4670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:52.061929    4670 client.go:171] duration metric: took 214.768208ms to LocalClient.Create
	I0731 15:03:54.064202    4670 start.go:128] duration metric: took 2.241808292s to createHost
	I0731 15:03:54.064318    4670 start.go:83] releasing machines lock for "force-systemd-flag-762000", held for 2.2420005s
	W0731 15:03:54.064378    4670 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:54.084671    4670 out.go:177] * Deleting "force-systemd-flag-762000" in qemu2 ...
	W0731 15:03:54.106599    4670 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:54.106630    4670 start.go:729] Will try again in 5 seconds ...
	I0731 15:03:59.108854    4670 start.go:360] acquireMachinesLock for force-systemd-flag-762000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:59.141077    4670 start.go:364] duration metric: took 32.106375ms to acquireMachinesLock for "force-systemd-flag-762000"
	I0731 15:03:59.141171    4670 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-762000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:59.141442    4670 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:59.151954    4670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:59.202913    4670 start.go:159] libmachine.API.Create for "force-systemd-flag-762000" (driver="qemu2")
	I0731 15:03:59.202955    4670 client.go:168] LocalClient.Create starting
	I0731 15:03:59.203081    4670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:59.203146    4670 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:59.203161    4670 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:59.203223    4670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:59.203271    4670 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:59.203282    4670 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:59.203871    4670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:59.367012    4670 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:59.448773    4670 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:59.448779    4670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:59.448972    4670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2
	I0731 15:03:59.458096    4670 main.go:141] libmachine: STDOUT: 
	I0731 15:03:59.458112    4670 main.go:141] libmachine: STDERR: 
	I0731 15:03:59.458150    4670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2 +20000M
	I0731 15:03:59.465871    4670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:59.465885    4670 main.go:141] libmachine: STDERR: 
	I0731 15:03:59.465898    4670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2
	I0731 15:03:59.465903    4670 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:59.465913    4670 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:59.465938    4670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:73:e9:7e:52:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-flag-762000/disk.qcow2
	I0731 15:03:59.467514    4670 main.go:141] libmachine: STDOUT: 
	I0731 15:03:59.467530    4670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:59.467543    4670 client.go:171] duration metric: took 264.585167ms to LocalClient.Create
	I0731 15:04:01.469675    4670 start.go:128] duration metric: took 2.328236625s to createHost
	I0731 15:04:01.469746    4670 start.go:83] releasing machines lock for "force-systemd-flag-762000", held for 2.3286905s
	W0731 15:04:01.470162    4670 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-762000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-762000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:04:01.484761    4670 out.go:177] 
	W0731 15:04:01.496102    4670 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:04:01.496225    4670 out.go:239] * 
	* 
	W0731 15:04:01.498755    4670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:04:01.505796    4670 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-762000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-762000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-762000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.252125ms)

-- stdout --
	* The control-plane node force-systemd-flag-762000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-762000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-762000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-31 15:04:01.598679 -0700 PDT m=+2273.043667292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-762000 -n force-systemd-flag-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-762000 -n force-systemd-flag-762000: exit status 7 (33.467334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-762000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-762000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-762000
--- FAIL: TestForceSystemdFlag (10.01s)
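
[Editor's note] Both createHost attempts above fail at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never started. A minimal sketch of the connectivity check, reproducible outside minikube (this helper is hypothetical and not part of docker_test.go; the socket path is the SocketVMnetPath value from the failing config above):

```go
// probe_socket_vmnet.go - standalone check for the socket_vmnet daemon.
// Hypothetical diagnostic, not part of the minikube test suite.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition minikube reports as
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening; VM networking should be available")
}
```

On this CI host the probe would exit non-zero, which suggests the failure is environmental (the socket_vmnet daemon on the Jenkins agent not running) rather than a regression in the code under test.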

TestForceSystemdEnv (11.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-397000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-397000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.183333417s)

-- stdout --
	* [force-systemd-env-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-397000" primary control-plane node in "force-systemd-env-397000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-397000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:03:45.321844    4638 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:03:45.321961    4638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:45.321965    4638 out.go:304] Setting ErrFile to fd 2...
	I0731 15:03:45.321967    4638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:45.322093    4638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:03:45.323241    4638 out.go:298] Setting JSON to false
	I0731 15:03:45.339613    4638 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3789,"bootTime":1722459636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:03:45.339684    4638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:03:45.345646    4638 out.go:177] * [force-systemd-env-397000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:03:45.353665    4638 notify.go:220] Checking for updates...
	I0731 15:03:45.358520    4638 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:03:45.367625    4638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:03:45.376565    4638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:03:45.384528    4638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:03:45.392513    4638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:03:45.400599    4638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 15:03:45.404894    4638 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:45.404935    4638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:03:45.408591    4638 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:03:45.416584    4638 start.go:297] selected driver: qemu2
	I0731 15:03:45.416588    4638 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:03:45.416593    4638 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:03:45.418774    4638 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:03:45.421623    4638 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:03:45.424651    4638 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 15:03:45.424665    4638 cni.go:84] Creating CNI manager for ""
	I0731 15:03:45.424671    4638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:03:45.424675    4638 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:03:45.424700    4638 start.go:340] cluster config:
	{Name:force-systemd-env-397000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:03:45.428310    4638 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:45.436601    4638 out.go:177] * Starting "force-systemd-env-397000" primary control-plane node in "force-systemd-env-397000" cluster
	I0731 15:03:45.440568    4638 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:03:45.440584    4638 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:03:45.440599    4638 cache.go:56] Caching tarball of preloaded images
	I0731 15:03:45.440677    4638 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:03:45.440684    4638 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:03:45.440754    4638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/force-systemd-env-397000/config.json ...
	I0731 15:03:45.440764    4638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/force-systemd-env-397000/config.json: {Name:mk21188f13c3d957cc11c4677b0e5a33a311c3a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:03:45.440957    4638 start.go:360] acquireMachinesLock for force-systemd-env-397000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:45.440990    4638 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "force-systemd-env-397000"
	I0731 15:03:45.441002    4638 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-397000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:45.441029    4638 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:45.444614    4638 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:45.461793    4638 start.go:159] libmachine.API.Create for "force-systemd-env-397000" (driver="qemu2")
	I0731 15:03:45.461820    4638 client.go:168] LocalClient.Create starting
	I0731 15:03:45.461876    4638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:45.461917    4638 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:45.461926    4638 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:45.461962    4638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:45.461985    4638 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:45.461993    4638 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:45.462314    4638 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:45.614074    4638 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:45.788260    4638 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:45.788267    4638 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:45.788435    4638 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2
	I0731 15:03:45.797802    4638 main.go:141] libmachine: STDOUT: 
	I0731 15:03:45.797816    4638 main.go:141] libmachine: STDERR: 
	I0731 15:03:45.797871    4638 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2 +20000M
	I0731 15:03:45.806237    4638 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:45.806257    4638 main.go:141] libmachine: STDERR: 
	I0731 15:03:45.806275    4638 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2
	I0731 15:03:45.806280    4638 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:45.806287    4638 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:45.806321    4638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e7:f0:12:28:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2
	I0731 15:03:45.808088    4638 main.go:141] libmachine: STDOUT: 
	I0731 15:03:45.808106    4638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:45.808126    4638 client.go:171] duration metric: took 346.307917ms to LocalClient.Create
	I0731 15:03:47.810225    4638 start.go:128] duration metric: took 2.369219666s to createHost
	I0731 15:03:47.810277    4638 start.go:83] releasing machines lock for "force-systemd-env-397000", held for 2.369325458s
	W0731 15:03:47.810298    4638 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:47.819970    4638 out.go:177] * Deleting "force-systemd-env-397000" in qemu2 ...
	W0731 15:03:47.830600    4638 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:47.830613    4638 start.go:729] Will try again in 5 seconds ...
	I0731 15:03:52.832762    4638 start.go:360] acquireMachinesLock for force-systemd-env-397000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:54.064503    4638 start.go:364] duration metric: took 1.231641791s to acquireMachinesLock for "force-systemd-env-397000"
	I0731 15:03:54.064662    4638 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-397000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-397000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:54.064893    4638 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:54.077597    4638 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 15:03:54.127161    4638 start.go:159] libmachine.API.Create for "force-systemd-env-397000" (driver="qemu2")
	I0731 15:03:54.127206    4638 client.go:168] LocalClient.Create starting
	I0731 15:03:54.127351    4638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:54.127414    4638 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:54.127432    4638 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:54.127494    4638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:54.127542    4638 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:54.127553    4638 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:54.128197    4638 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:54.291402    4638 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:54.416659    4638 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:54.416667    4638 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:54.416900    4638 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2
	I0731 15:03:54.426102    4638 main.go:141] libmachine: STDOUT: 
	I0731 15:03:54.426120    4638 main.go:141] libmachine: STDERR: 
	I0731 15:03:54.426165    4638 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2 +20000M
	I0731 15:03:54.433922    4638 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:54.433938    4638 main.go:141] libmachine: STDERR: 
	I0731 15:03:54.433948    4638 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2
	I0731 15:03:54.433952    4638 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:54.433961    4638 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:54.433997    4638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:1f:b0:12:45:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/force-systemd-env-397000/disk.qcow2
	I0731 15:03:54.435613    4638 main.go:141] libmachine: STDOUT: 
	I0731 15:03:54.435631    4638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:54.435647    4638 client.go:171] duration metric: took 308.44125ms to LocalClient.Create
	I0731 15:03:56.437804    4638 start.go:128] duration metric: took 2.372900667s to createHost
	I0731 15:03:56.437860    4638 start.go:83] releasing machines lock for "force-systemd-env-397000", held for 2.373353833s
	W0731 15:03:56.438372    4638 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:56.445944    4638 out.go:177] 
	W0731 15:03:56.451061    4638 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:03:56.451090    4638 out.go:239] * 
	* 
	W0731 15:03:56.454044    4638 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:03:56.463988    4638 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-397000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-397000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-397000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.144959ms)

-- stdout --
	* The control-plane node force-systemd-env-397000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-397000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-397000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-31 15:03:56.558946 -0700 PDT m=+2268.003842417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-397000 -n force-systemd-env-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-397000 -n force-systemd-env-397000: exit status 7 (34.643166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-397000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-397000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-397000
--- FAIL: TestForceSystemdEnv (11.38s)
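
[Editor's note] As with TestForceSystemdFlag, this test never reaches its actual assertion: the `docker info --format {{.CgroupDriver}}` step at docker_test.go:110 runs only after the VM is up. As an illustration (a sketch, not the test's real helper, which runs the command over SSH inside the VM), the check it would perform amounts to:

```go
// cgroup_driver_check.go - sketch of the assertion docker_test.go:110 would
// make if the VM had started; cgroupDriver is a hypothetical helper name,
// shown here against the local docker daemon for simplicity.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func cgroupDriver() (string, error) {
	// Same command the test issues: docker info --format {{.CgroupDriver}}
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := cgroupDriver()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// With --force-systemd or MINIKUBE_FORCE_SYSTEMD=true, the expected value is "systemd".
	fmt.Println("cgroup driver:", driver)
}
```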

TestFunctional/parallel/ServiceCmdConnect (32.27s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-430000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-430000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-2khsn" [e72c02f0-60b3-466c-92f5-ba57ac6e0442] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-2khsn" [e72c02f0-60b3-466c-92f5-ba57ac6e0442] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004337958s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31774
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31774: Get "http://192.168.105.4:31774": dial tcp 192.168.105.4:31774: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-430000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-2khsn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-430000/192.168.105.4
Start Time:       Wed, 31 Jul 2024 14:37:37 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://229e91fe0833780e35aecccc0345b2fd32526f1c6546a9d30323f7380c4c4d4e
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 31 Jul 2024 14:37:58 -0700
Finished:     Wed, 31 Jul 2024 14:37:58 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 31 Jul 2024 14:37:41 -0700
Finished:     Wed, 31 Jul 2024 14:37:41 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mtg96 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-mtg96:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-2khsn to functional-430000
Normal   Pulling    30s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     28s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 2.844s (2.844s including waiting). Image size: 84957542 bytes.
Normal   Created    10s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    10s (x3 over 27s)  kubelet            Started container echoserver-arm
Normal   Pulled     10s (x2 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    10s (x3 over 26s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-2khsn_default(e72c02f0-60b3-466c-92f5-ba57ac6e0442)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-430000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-430000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.160.183
IPs:                      10.108.160.183
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31774/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
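
[Editor's note] The post-mortem above shows the real failure: the container exits immediately with `exec /usr/sbin/nginx: exec format error` (the binary inside registry.k8s.io/echoserver-arm:1.8 does not match the arm64 node's architecture), so the service has no ready Endpoints and every fetch of the NodePort URL is refused. A minimal sketch of the kind of retrying probe functional_test.go:1661 performs (the URL is hard-coded here purely for illustration; the test obtains it from `minikube service hello-node-connect --url`):

```go
// nodeport_probe.go - sketch of a retrying HTTP fetch against the NodePort.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://192.168.105.4:31774" // endpoint reported by the test above
	for attempt := 1; attempt <= 7; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			// With no ready endpoints behind the NodePort, the connection is
			// rejected, matching the log's "connection refused" lines.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
		return
	}
}
```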
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-430000 -n functional-430000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:37 PDT | 31 Jul 24 14:37 PDT |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh -- ls                                                                                        | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:37 PDT | 31 Jul 24 14:37 PDT |
	|           | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh cat                                                                                          | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:37 PDT | 31 Jul 24 14:37 PDT |
	|           | /mount-9p/test-1722461879165022000                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh stat                                                                                         | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | /mount-9p/created-by-test                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh stat                                                                                         | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | /mount-9p/created-by-pod                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh sudo                                                                                         | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port23762041/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh -- ls                                                                                        | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh sudo                                                                                         | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| mount     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount1 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-430000 ssh findmnt                                                                                      | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT | 31 Jul 24 14:38 PDT |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-430000                                                                                               | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-430000 --dry-run                                                                                     | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                 | functional-430000 | jenkins | v1.33.1 | 31 Jul 24 14:38 PDT |                     |
	|           | -p functional-430000                                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
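	
	The specific-port mount exercise recorded above can be replayed by hand against the same profile. A minimal sketch using only the invocations from the table (the audit log omits the leading minikube binary name; <host-dir> is a placeholder for the long TestFunctional temp path):
	
	  $ minikube -p functional-430000 mount <host-dir>:/mount-9p --alsologtostderr -v=1 --port 46464
	  $ minikube -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
	  $ minikube -p functional-430000 ssh "sudo umount -f /mount-9p"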
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:38:07
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:38:07.393834    2913 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:38:07.393984    2913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:38:07.393987    2913 out.go:304] Setting ErrFile to fd 2...
	I0731 14:38:07.393989    2913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:38:07.394112    2913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:38:07.395144    2913 out.go:298] Setting JSON to false
	I0731 14:38:07.411914    2913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2251,"bootTime":1722459636,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:38:07.411997    2913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:38:07.416736    2913 out.go:177] * [functional-430000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:38:07.423747    2913 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 14:38:07.423824    2913 notify.go:220] Checking for updates...
	I0731 14:38:07.430799    2913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:38:07.433719    2913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:38:07.436750    2913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:38:07.439692    2913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 14:38:07.442766    2913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:38:07.445989    2913 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:38:07.446239    2913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:38:07.450748    2913 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 14:38:07.457642    2913 start.go:297] selected driver: qemu2
	I0731 14:38:07.457648    2913 start.go:901] validating driver "qemu2" against &{Name:functional-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:38:07.457688    2913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:38:07.459881    2913 cni.go:84] Creating CNI manager for ""
	I0731 14:38:07.459897    2913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 14:38:07.459953    2913 start.go:340] cluster config:
	{Name:functional-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:38:07.470584    2913 out.go:177] * dry-run validation complete!
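	
	This run is one of the dry-run starts from the command table: --dry-run walks driver selection and config validation against the existing profile and stops before provisioning, which is why the log ends at "dry-run validation complete!" with no VM boot. The invocation as recorded in the table, e.g.:
	
	  $ minikube start -p functional-430000 --dry-run --alsologtostderr -v=1 --driver=qemu2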
	
	
	==> Docker <==
	Jul 31 21:38:01 functional-430000 dockerd[5817]: time="2024-07-31T21:38:01.469063805Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:38:01 functional-430000 dockerd[5817]: time="2024-07-31T21:38:01.472978627Z" level=warning msg="cleanup warnings time=\"2024-07-31T21:38:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 31 21:38:02 functional-430000 dockerd[5810]: time="2024-07-31T21:38:02.987216776Z" level=info msg="ignoring event" container=f414951d936a77609c9f5feeb9a0d83f1ec5eab52e163615b01d279d5bbcffe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:38:02 functional-430000 dockerd[5817]: time="2024-07-31T21:38:02.987398718Z" level=info msg="shim disconnected" id=f414951d936a77609c9f5feeb9a0d83f1ec5eab52e163615b01d279d5bbcffe2 namespace=moby
	Jul 31 21:38:02 functional-430000 dockerd[5817]: time="2024-07-31T21:38:02.987472454Z" level=warning msg="cleaning up after shim disconnected" id=f414951d936a77609c9f5feeb9a0d83f1ec5eab52e163615b01d279d5bbcffe2 namespace=moby
	Jul 31 21:38:02 functional-430000 dockerd[5817]: time="2024-07-31T21:38:02.987477247Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.382407616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.382437169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.382442838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.382470098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.409928222Z" level=info msg="shim disconnected" id=da416329c2e9a71c102e841ffa21731e29af009c07a3b5e41118922495fc6cb1 namespace=moby
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.409973364Z" level=warning msg="cleaning up after shim disconnected" id=da416329c2e9a71c102e841ffa21731e29af009c07a3b5e41118922495fc6cb1 namespace=moby
	Jul 31 21:38:05 functional-430000 dockerd[5817]: time="2024-07-31T21:38:05.409997790Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 21:38:05 functional-430000 dockerd[5810]: time="2024-07-31T21:38:05.409958025Z" level=info msg="ignoring event" container=da416329c2e9a71c102e841ffa21731e29af009c07a3b5e41118922495fc6cb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.330166643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.330193653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.330198905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.330224832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:38:08 functional-430000 cri-dockerd[6064]: time="2024-07-31T21:38:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e6176d5d02410511f8d6a3c1ef24ed8de37738ee3c7f040e3f240c715e2da2d8/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.376262207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.376303639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.376316560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:38:08 functional-430000 dockerd[5817]: time="2024-07-31T21:38:08.376350990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 21:38:08 functional-430000 cri-dockerd[6064]: time="2024-07-31T21:38:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a28490bd3454a66a835b042da87813497c3925c5ae186d6b1e764a65d79974a0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 21:38:08 functional-430000 dockerd[5810]: time="2024-07-31T21:38:08.610745160Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
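	
	The "shim disconnected" / "cleaning up dead shim" sequences above correspond to containers exiting (the echoserver-arm and busybox-mount containers shown as Exited in the status table below). The same state could be inspected directly inside the VM, e.g.:
	
	  $ minikube -p functional-430000 ssh "docker ps -a"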
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	da416329c2e9a       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            2                   015cc47b06f72       hello-node-65f5d5cc78-vn8mp
	16fc72cbc3261       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   f414951d936a7       busybox-mount
	229e91fe08337       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            2                   932e78e80ed9f       hello-node-connect-6f49f58cd5-2khsn
	e34d57a9d9a56       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         25 seconds ago       Running             myfrontend                0                   6d6e8d5f562fc       sp-pod
	f0b5630032133       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         40 seconds ago       Running             nginx                     0                   14e2eee699d14       nginx-svc
	921a4b81756b8       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   8849ad138ccca       storage-provisioner
	58003a5a520ff       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   c66a32cbe1d2a       coredns-7db6d8ff4d-d4mw2
	6e8514724432b       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   8849ad138ccca       storage-provisioner
	72a84a79a24c9       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   7cb907762efef       kube-proxy-9hzkh
	5b7e6dbba540b       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   742724a90d544       kube-scheduler-functional-430000
	3c204acd74fb6       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   7cb7dcb1b1c52       kube-controller-manager-functional-430000
	047e8e95dca63       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   b714e41212a88       etcd-functional-430000
	a1a8b974b76a8       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   fdce4dde3e7aa       kube-apiserver-functional-430000
	6744a0f907d0e       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   441155cc37ab9       coredns-7db6d8ff4d-d4mw2
	5d622453fbd78       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   a4713623e0da6       kube-proxy-9hzkh
	08b1abb6bc010       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   c2b99f1e99a26       etcd-functional-430000
	6613f65c2df5c       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   a96135bc6c1e0       kube-controller-manager-functional-430000
	cef496eb0cf87       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   72709f72c1a18       kube-scheduler-functional-430000
	
	
	==> coredns [58003a5a520f] <==
	[INFO] 127.0.0.1:46421 - 10607 "HINFO IN 3355236736461759874.2664643722651295821. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009560556s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[265980230]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 21:36:35.825) (total time: 30002ms):
	Trace[265980230]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (21:37:05.827)
	Trace[265980230]: [30.002928334s] [30.002928334s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1867066666]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 21:36:35.825) (total time: 30002ms):
	Trace[1867066666]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (21:37:05.828)
	Trace[1867066666]: [30.002916877s] [30.002916877s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1994526493]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 21:36:35.825) (total time: 30002ms):
	Trace[1994526493]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (21:37:05.828)
	Trace[1994526493]: [30.002806584s] [30.002806584s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:10366 - 55588 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000093662s
	[INFO] 10.244.0.1:43345 - 3933 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000089744s
	[INFO] 10.244.0.1:48801 - 23215 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036556s
	[INFO] 10.244.0.1:60085 - 61461 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000994225s
	[INFO] 10.244.0.1:35918 - 16631 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000054605s
	[INFO] 10.244.0.1:53182 - 61654 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000277026s
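	
	The reflector failures above (dial tcp 10.96.0.1:443: i/o timeout) mean this CoreDNS instance could not reach the in-cluster apiserver service for roughly 30s while the control plane restarted; the later NOERROR answers for nginx-svc show it recovered. Assuming kubectl is pointed at this cluster and CoreDNS carries the standard k8s-app=kube-dns label, recovery could be confirmed with:
	
	  $ kubectl get endpoints kubernetes            # expect 192.168.105.4:8441
	  $ kubectl -n kube-system get pods -l k8s-app=kube-dns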
	
	
	==> coredns [6744a0f907d0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48412 - 9384 "HINFO IN 8553448202216950197.2164475016946301542. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009763366s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-430000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-430000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=functional-430000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T14_35_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:35:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-430000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:38:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:38:06 +0000   Wed, 31 Jul 2024 21:35:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:38:06 +0000   Wed, 31 Jul 2024 21:35:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:38:06 +0000   Wed, 31 Jul 2024 21:35:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:38:06 +0000   Wed, 31 Jul 2024 21:35:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-430000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4f7f96b5d8e4f3a935c76960c197cb0
	  System UUID:                d4f7f96b5d8e4f3a935c76960c197cb0
	  Boot ID:                    e70f7a59-ac09-4c3a-bdf0-2efb2958c72b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-vn8mp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     hello-node-connect-6f49f58cd5-2khsn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 coredns-7db6d8ff4d-d4mw2                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m38s
	  kube-system                 etcd-functional-430000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m52s
	  kube-system                 kube-apiserver-functional-430000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-functional-430000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-proxy-9hzkh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-scheduler-functional-430000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-cd2d2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-kdxxp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m37s                  kube-proxy       
	  Normal  Starting                 93s                    kube-proxy       
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s                  kubelet          Node functional-430000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m52s                  kubelet          Node functional-430000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s                  kubelet          Node functional-430000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m49s                  kubelet          Node functional-430000 status is now: NodeReady
	  Normal  RegisteredNode           2m39s                  node-controller  Node functional-430000 event: Registered Node functional-430000 in Controller
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node functional-430000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node functional-430000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node functional-430000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m6s                   node-controller  Node functional-430000 event: Registered Node functional-430000 in Controller
	  Normal  Starting                 97s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)      kubelet          Node functional-430000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)      kubelet          Node functional-430000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 97s)      kubelet          Node functional-430000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           82s                    node-controller  Node functional-430000 event: Registered Node functional-430000 in Controller
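	
	The three kubelet "Starting" events (2m52s, 2m21s and 97s ago) line up with the repeated control-plane container restarts visible in the container status table above. This section is the standard node view and can be regenerated with:
	
	  $ kubectl describe node functional-430000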
	
	
	==> dmesg <==
	[Jul31 21:36] kauditd_printk_skb: 31 callbacks suppressed
	[  +4.497220] systemd-fstab-generator[4905]: Ignoring "noauto" option for root device
	[  +9.896202] systemd-fstab-generator[5337]: Ignoring "noauto" option for root device
	[  +0.052857] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.095088] systemd-fstab-generator[5370]: Ignoring "noauto" option for root device
	[  +0.094679] systemd-fstab-generator[5382]: Ignoring "noauto" option for root device
	[  +0.090803] systemd-fstab-generator[5396]: Ignoring "noauto" option for root device
	[  +5.102125] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.339882] systemd-fstab-generator[6017]: Ignoring "noauto" option for root device
	[  +0.091117] systemd-fstab-generator[6029]: Ignoring "noauto" option for root device
	[  +0.091933] systemd-fstab-generator[6041]: Ignoring "noauto" option for root device
	[  +0.102725] systemd-fstab-generator[6056]: Ignoring "noauto" option for root device
	[  +0.223232] systemd-fstab-generator[6223]: Ignoring "noauto" option for root device
	[  +0.937367] systemd-fstab-generator[6348]: Ignoring "noauto" option for root device
	[  +1.283646] kauditd_printk_skb: 194 callbacks suppressed
	[ +13.703830] kauditd_printk_skb: 37 callbacks suppressed
	[Jul31 21:37] systemd-fstab-generator[7482]: Ignoring "noauto" option for root device
	[  +4.989952] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.092531] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.972343] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.074896] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.729442] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.585311] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.379817] kauditd_printk_skb: 20 callbacks suppressed
	[Jul31 21:38] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [047e8e95dca6] <==
	{"level":"info","ts":"2024-07-31T21:36:32.995567Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T21:36:32.996704Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:36:32.996831Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:36:32.996863Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:36:32.996957Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T21:36:32.996978Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T21:36:32.997872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-31T21:36:32.997915Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-31T21:36:32.997972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:36:32.998001Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:36:33.984355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T21:36:33.984401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:36:33.98453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T21:36:33.984543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T21:36:33.984546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-31T21:36:33.984562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-31T21:36:33.984584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-31T21:36:33.985665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:36:33.985738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:36:33.985802Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:36:33.985811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:36:33.985671Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-430000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:36:33.986727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-31T21:36:33.986986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:37:41.026597Z","caller":"traceutil/trace.go:171","msg":"trace[1553600422] transaction","detail":"{read_only:false; response_revision:714; number_of_response:1; }","duration":"100.403918ms","start":"2024-07-31T21:37:40.926182Z","end":"2024-07-31T21:37:41.026586Z","steps":["trace[1553600422] 'process raft request'  (duration: 100.347313ms)"],"step_count":1}
	
	
	==> etcd [08b1abb6bc01] <==
	{"level":"info","ts":"2024-07-31T21:35:49.633836Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:35:50.921335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T21:35:50.921501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:35:50.921542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-31T21:35:50.921572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T21:35:50.921588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T21:35:50.921613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T21:35:50.92164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T21:35:50.923977Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-430000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:35:50.924044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:35:50.924874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:35:50.927858Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:35:50.927991Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:35:50.928923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:35:50.931476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-31T21:36:18.411338Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T21:36:18.411372Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-430000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-31T21:36:18.411415Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:36:18.411458Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:36:18.430521Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T21:36:18.430547Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T21:36:18.430571Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-31T21:36:18.432576Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T21:36:18.432614Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T21:36:18.432618Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-430000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 21:38:09 up 3 min,  0 users,  load average: 0.97, 0.49, 0.20
	Linux functional-430000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a1a8b974b76a] <==
	I0731 21:36:34.574469       1 aggregator.go:165] initial CRD sync complete...
	I0731 21:36:34.574471       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 21:36:34.574474       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 21:36:34.574476       1 cache.go:39] Caches are synced for autoregister controller
	I0731 21:36:34.617337       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 21:36:34.617350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 21:36:34.617355       1 policy_source.go:224] refreshing policies
	I0731 21:36:34.622744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 21:36:35.468633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 21:36:35.572999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0731 21:36:35.573562       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 21:36:35.575113       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 21:36:35.897116       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 21:36:35.900907       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 21:36:35.912242       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 21:36:35.920029       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 21:36:35.921995       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 21:37:21.540088       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.35.249"}
	I0731 21:37:26.285681       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.98.94"}
	I0731 21:37:37.641479       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 21:37:37.684440       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.160.183"}
	I0731 21:37:52.004545       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.245.54"}
	I0731 21:38:07.939611       1 controller.go:615] quota admission added evaluator for: namespaces
	I0731 21:38:08.017874       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.221.238"}
	I0731 21:38:08.025444       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.13.70"}
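	
	Each "allocated clusterIPs" line marks a Service creation (invalid-svc, nginx-svc, hello-node-connect, hello-node, then the two dashboard services). Assuming kubectl is pointed at this cluster, the allocations can be cross-checked against the live objects with:
	
	  $ kubectl get svc -A -o wide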
	
	
	==> kube-controller-manager [3c204acd74fb] <==
	I0731 21:37:58.884586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.135µs"
	I0731 21:38:05.340836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="21.841µs"
	I0731 21:38:05.940413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.676µs"
	I0731 21:38:07.967908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.029756ms"
	E0731 21:38:07.967963       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.973418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.069443ms"
	E0731 21:38:07.973552       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.974105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.109838ms"
	E0731 21:38:07.974819       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.979298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.721028ms"
	E0731 21:38:07.979314       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.982206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="7.370888ms"
	E0731 21:38:07.982299       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.982937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.590908ms"
	E0731 21:38:07.982975       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.987455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.543314ms"
	E0731 21:38:07.987588       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 21:38:07.996820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.083448ms"
	I0731 21:38:08.005414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.452495ms"
	I0731 21:38:08.014238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.804227ms"
	I0731 21:38:08.014291       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="16.006µs"
	I0731 21:38:08.043808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="15.649529ms"
	I0731 21:38:08.049707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.873461ms"
	I0731 21:38:08.062305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.570975ms"
	I0731 21:38:08.062354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="25.676µs"
	
	
	==> kube-controller-manager [6613f65c2df5] <==
	I0731 21:36:03.613210       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 21:36:03.615359       1 shared_informer.go:320] Caches are synced for cronjob
	I0731 21:36:03.615516       1 shared_informer.go:320] Caches are synced for HPA
	I0731 21:36:03.616468       1 shared_informer.go:320] Caches are synced for PV protection
	I0731 21:36:03.618620       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 21:36:03.618650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 21:36:03.618650       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 21:36:03.618654       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 21:36:03.629939       1 shared_informer.go:320] Caches are synced for deployment
	I0731 21:36:03.629945       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 21:36:03.629959       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 21:36:03.631606       1 shared_informer.go:320] Caches are synced for taint
	I0731 21:36:03.631700       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 21:36:03.631777       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-430000"
	I0731 21:36:03.631825       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 21:36:03.632009       1 shared_informer.go:320] Caches are synced for crt configmap
	I0731 21:36:03.704652       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:36:03.717001       1 shared_informer.go:320] Caches are synced for daemon sets
	I0731 21:36:03.731216       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 21:36:03.734532       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 21:36:03.797855       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 21:36:03.812269       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 21:36:04.244765       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 21:36:04.289901       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 21:36:04.289914       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5d622453fbd7] <==
	I0731 21:35:53.191663       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:35:53.200952       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0731 21:35:53.208398       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:35:53.208414       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:35:53.208421       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:35:53.209036       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:35:53.209096       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:35:53.209104       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:35:53.209574       1 config.go:192] "Starting service config controller"
	I0731 21:35:53.209587       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:35:53.209599       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:35:53.209673       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:35:53.209895       1 config.go:319] "Starting node config controller"
	I0731 21:35:53.209917       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:35:53.310236       1 shared_informer.go:320] Caches are synced for node config
	I0731 21:35:53.310246       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:35:53.310265       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [72a84a79a24c] <==
	I0731 21:36:35.830155       1 server_linux.go:69] "Using iptables proxy"
	I0731 21:36:35.833679       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0731 21:36:35.845722       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 21:36:35.845746       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:36:35.845756       1 server_linux.go:165] "Using iptables Proxier"
	I0731 21:36:35.849334       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 21:36:35.849466       1 server.go:872] "Version info" version="v1.30.3"
	I0731 21:36:35.849477       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:36:35.849930       1 config.go:192] "Starting service config controller"
	I0731 21:36:35.849941       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:36:35.849989       1 config.go:101] "Starting endpoint slice config controller"
	I0731 21:36:35.849995       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:36:35.850216       1 config.go:319] "Starting node config controller"
	I0731 21:36:35.850244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:36:35.950655       1 shared_informer.go:320] Caches are synced for node config
	I0731 21:36:35.950655       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:36:35.950691       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5b7e6dbba540] <==
	W0731 21:36:34.520947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:36:34.521281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 21:36:34.520959       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 21:36:34.521325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 21:36:34.520971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:36:34.521358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 21:36:34.520982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 21:36:34.521400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 21:36:34.520994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 21:36:34.521433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 21:36:34.521005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 21:36:34.521478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 21:36:34.521016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 21:36:34.521637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 21:36:34.521063       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:36:34.521668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 21:36:34.521075       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 21:36:34.521711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 21:36:34.521086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:36:34.521744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 21:36:34.521096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:36:34.521788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 21:36:34.521107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:36:34.521823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0731 21:36:35.620396       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cef496eb0cf8] <==
	I0731 21:35:49.940373       1 serving.go:380] Generated self-signed cert in-memory
	W0731 21:35:51.471066       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 21:35:51.471131       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 21:35:51.471150       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 21:35:51.471185       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 21:35:51.500855       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 21:35:51.500948       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:35:51.501646       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 21:35:51.501716       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 21:35:51.501753       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 21:35:51.501774       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 21:35:51.601797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 21:36:18.410347       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 21:37:58 functional-430000 kubelet[6355]: I0731 21:37:58.878336    6355 scope.go:117] "RemoveContainer" containerID="f30840c134896e4506550e8330fac4f2660105ed56b76c3ecfa43fffbc77bf4c"
	Jul 31 21:37:58 functional-430000 kubelet[6355]: I0731 21:37:58.878476    6355 scope.go:117] "RemoveContainer" containerID="229e91fe0833780e35aecccc0345b2fd32526f1c6546a9d30323f7380c4c4d4e"
	Jul 31 21:37:58 functional-430000 kubelet[6355]: E0731 21:37:58.878560    6355 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-2khsn_default(e72c02f0-60b3-466c-92f5-ba57ac6e0442)\"" pod="default/hello-node-connect-6f49f58cd5-2khsn" podUID="e72c02f0-60b3-466c-92f5-ba57ac6e0442"
	Jul 31 21:37:59 functional-430000 kubelet[6355]: I0731 21:37:59.897765    6355 topology_manager.go:215] "Topology Admit Handler" podUID="1bdda2ae-158d-46ff-a78a-cefba7da761e" podNamespace="default" podName="busybox-mount"
	Jul 31 21:38:00 functional-430000 kubelet[6355]: I0731 21:38:00.008385    6355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1bdda2ae-158d-46ff-a78a-cefba7da761e-test-volume\") pod \"busybox-mount\" (UID: \"1bdda2ae-158d-46ff-a78a-cefba7da761e\") " pod="default/busybox-mount"
	Jul 31 21:38:00 functional-430000 kubelet[6355]: I0731 21:38:00.008407    6355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjcjg\" (UniqueName: \"kubernetes.io/projected/1bdda2ae-158d-46ff-a78a-cefba7da761e-kube-api-access-tjcjg\") pod \"busybox-mount\" (UID: \"1bdda2ae-158d-46ff-a78a-cefba7da761e\") " pod="default/busybox-mount"
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.124446    6355 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjcjg\" (UniqueName: \"kubernetes.io/projected/1bdda2ae-158d-46ff-a78a-cefba7da761e-kube-api-access-tjcjg\") pod \"1bdda2ae-158d-46ff-a78a-cefba7da761e\" (UID: \"1bdda2ae-158d-46ff-a78a-cefba7da761e\") "
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.124466    6355 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1bdda2ae-158d-46ff-a78a-cefba7da761e-test-volume\") pod \"1bdda2ae-158d-46ff-a78a-cefba7da761e\" (UID: \"1bdda2ae-158d-46ff-a78a-cefba7da761e\") "
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.124507    6355 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bdda2ae-158d-46ff-a78a-cefba7da761e-test-volume" (OuterVolumeSpecName: "test-volume") pod "1bdda2ae-158d-46ff-a78a-cefba7da761e" (UID: "1bdda2ae-158d-46ff-a78a-cefba7da761e"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.127238    6355 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bdda2ae-158d-46ff-a78a-cefba7da761e-kube-api-access-tjcjg" (OuterVolumeSpecName: "kube-api-access-tjcjg") pod "1bdda2ae-158d-46ff-a78a-cefba7da761e" (UID: "1bdda2ae-158d-46ff-a78a-cefba7da761e"). InnerVolumeSpecName "kube-api-access-tjcjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.224618    6355 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tjcjg\" (UniqueName: \"kubernetes.io/projected/1bdda2ae-158d-46ff-a78a-cefba7da761e-kube-api-access-tjcjg\") on node \"functional-430000\" DevicePath \"\""
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.224629    6355 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1bdda2ae-158d-46ff-a78a-cefba7da761e-test-volume\") on node \"functional-430000\" DevicePath \"\""
	Jul 31 21:38:03 functional-430000 kubelet[6355]: I0731 21:38:03.923875    6355 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f414951d936a77609c9f5feeb9a0d83f1ec5eab52e163615b01d279d5bbcffe2"
	Jul 31 21:38:05 functional-430000 kubelet[6355]: I0731 21:38:05.330099    6355 scope.go:117] "RemoveContainer" containerID="fb2e25dfc88a6a3490c10596909732588dca76d1fc67f33161729e37fd20e3c2"
	Jul 31 21:38:05 functional-430000 kubelet[6355]: I0731 21:38:05.935569    6355 scope.go:117] "RemoveContainer" containerID="fb2e25dfc88a6a3490c10596909732588dca76d1fc67f33161729e37fd20e3c2"
	Jul 31 21:38:05 functional-430000 kubelet[6355]: I0731 21:38:05.935738    6355 scope.go:117] "RemoveContainer" containerID="da416329c2e9a71c102e841ffa21731e29af009c07a3b5e41118922495fc6cb1"
	Jul 31 21:38:05 functional-430000 kubelet[6355]: E0731 21:38:05.935822    6355 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-vn8mp_default(db26b016-be70-4b15-a759-a1c990da3063)\"" pod="default/hello-node-65f5d5cc78-vn8mp" podUID="db26b016-be70-4b15-a759-a1c990da3063"
	Jul 31 21:38:07 functional-430000 kubelet[6355]: I0731 21:38:07.994188    6355 topology_manager.go:215] "Topology Admit Handler" podUID="79da57f0-d42b-4579-9694-ff422b2287be" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-kdxxp"
	Jul 31 21:38:07 functional-430000 kubelet[6355]: E0731 21:38:07.994229    6355 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1bdda2ae-158d-46ff-a78a-cefba7da761e" containerName="mount-munger"
	Jul 31 21:38:07 functional-430000 kubelet[6355]: I0731 21:38:07.994245    6355 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bdda2ae-158d-46ff-a78a-cefba7da761e" containerName="mount-munger"
	Jul 31 21:38:08 functional-430000 kubelet[6355]: I0731 21:38:08.044622    6355 topology_manager.go:215] "Topology Admit Handler" podUID="b2dbe7df-00ad-430c-b074-a568b7d244e9" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-cd2d2"
	Jul 31 21:38:08 functional-430000 kubelet[6355]: I0731 21:38:08.154265    6355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrczf\" (UniqueName: \"kubernetes.io/projected/79da57f0-d42b-4579-9694-ff422b2287be-kube-api-access-lrczf\") pod \"kubernetes-dashboard-779776cb65-kdxxp\" (UID: \"79da57f0-d42b-4579-9694-ff422b2287be\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-kdxxp"
	Jul 31 21:38:08 functional-430000 kubelet[6355]: I0731 21:38:08.154290    6355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b2dbe7df-00ad-430c-b074-a568b7d244e9-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-cd2d2\" (UID: \"b2dbe7df-00ad-430c-b074-a568b7d244e9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-cd2d2"
	Jul 31 21:38:08 functional-430000 kubelet[6355]: I0731 21:38:08.154300    6355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqxzg\" (UniqueName: \"kubernetes.io/projected/b2dbe7df-00ad-430c-b074-a568b7d244e9-kube-api-access-vqxzg\") pod \"dashboard-metrics-scraper-b5fc48f67-cd2d2\" (UID: \"b2dbe7df-00ad-430c-b074-a568b7d244e9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-cd2d2"
	Jul 31 21:38:08 functional-430000 kubelet[6355]: I0731 21:38:08.154313    6355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/79da57f0-d42b-4579-9694-ff422b2287be-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-kdxxp\" (UID: \"79da57f0-d42b-4579-9694-ff422b2287be\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-kdxxp"
	
	
	==> storage-provisioner [6e8514724432] <==
	I0731 21:36:35.792068       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 21:36:35.796912       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [921a4b81756b] <==
	I0731 21:36:51.395186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:36:51.422065       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:36:51.422083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:37:08.809161       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:37:08.809261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-430000_a2b6db84-0eba-4c1b-baa1-df36890585d3!
	I0731 21:37:08.809309       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3f71ca1-fc58-4134-9bd2-2aea7d65142d", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-430000_a2b6db84-0eba-4c1b-baa1-df36890585d3 became leader
	I0731 21:37:08.909366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-430000_a2b6db84-0eba-4c1b-baa1-df36890585d3!
	I0731 21:37:32.143499       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0731 21:37:32.143820       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    c97deaed-cc48-4b3f-b63a-8e121ec0374c 337 0 2024-07-31 21:35:31 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-31 21:35:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3dd10f9a-423d-4cef-b11a-109363376cdc &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3dd10f9a-423d-4cef-b11a-109363376cdc 668 0 2024-07-31 21:37:32 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-31 21:37:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-31 21:37:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0731 21:37:32.144791       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3dd10f9a-423d-4cef-b11a-109363376cdc" provisioned
	I0731 21:37:32.144832       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0731 21:37:32.144851       1 volume_store.go:212] Trying to save persistentvolume "pvc-3dd10f9a-423d-4cef-b11a-109363376cdc"
	I0731 21:37:32.145579       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3dd10f9a-423d-4cef-b11a-109363376cdc", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0731 21:37:32.149548       1 volume_store.go:219] persistentvolume "pvc-3dd10f9a-423d-4cef-b11a-109363376cdc" saved
	I0731 21:37:32.149680       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3dd10f9a-423d-4cef-b11a-109363376cdc", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3dd10f9a-423d-4cef-b11a-109363376cdc
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-430000 -n functional-430000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-430000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-cd2d2 kubernetes-dashboard-779776cb65-kdxxp
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-430000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-cd2d2 kubernetes-dashboard-779776cb65-kdxxp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-430000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-cd2d2 kubernetes-dashboard-779776cb65-kdxxp: exit status 1 (41.936709ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-430000/192.168.105.4
	Start Time:       Wed, 31 Jul 2024 14:37:59 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://16fc72cbc326105b9358592cf7e93f08b9147d7d74e45d0a0521d83cbadc2fc6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 31 Jul 2024 14:38:01 -0700
	      Finished:     Wed, 31 Jul 2024 14:38:01 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tjcjg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tjcjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-430000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.1s (1.1s including waiting). Image size: 3547125 bytes.
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-cd2d2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-kdxxp" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-430000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-cd2d2 kubernetes-dashboard-779776cb65-kdxxp: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.27s)
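The kubelet log above points at the root cause: the echoserver-arm container behind both hello-node deployments is stuck in CrashLoopBackOff, so the service connect check never succeeds. A minimal manual re-check of that state (a sketch; it assumes the functional-430000 context still exists and relies on the app=hello-node-connect label that kubectl create deployment sets by default):

	# list the pods behind the hello-node-connect service
	kubectl --context functional-430000 get pods -l app=hello-node-connect
	# fetch the output of the last crashed container attempt
	kubectl --context functional-430000 logs deployment/hello-node-connect --previous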

TestMultiControlPlane/serial/StopSecondaryNode (214.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 node stop m02 -v=7 --alsologtostderr
E0731 14:42:31.111211    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:42:36.233317    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-875000 node stop m02 -v=7 --alsologtostderr: (12.188044875s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr
E0731 14:42:46.475422    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:43:06.955869    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:43:47.917284    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:45:09.837917    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:45:18.308025    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr: exit status 7 (2m55.966705334s)

-- stdout --
	ha-875000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-875000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-875000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 14:42:41.022967    3360 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:42:41.023154    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:42:41.023157    3360 out.go:304] Setting ErrFile to fd 2...
	I0731 14:42:41.023160    3360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:42:41.023302    3360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:42:41.023441    3360 out.go:298] Setting JSON to false
	I0731 14:42:41.023451    3360 mustload.go:65] Loading cluster: ha-875000
	I0731 14:42:41.023524    3360 notify.go:220] Checking for updates...
	I0731 14:42:41.023679    3360 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:42:41.023688    3360 status.go:255] checking status of ha-875000 ...
	I0731 14:42:41.024460    3360 status.go:330] ha-875000 host status = "Running" (err=<nil>)
	I0731 14:42:41.024470    3360 host.go:66] Checking if "ha-875000" exists ...
	I0731 14:42:41.024560    3360 host.go:66] Checking if "ha-875000" exists ...
	I0731 14:42:41.024665    3360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:42:41.024673    3360 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/id_rsa Username:docker}
	W0731 14:43:06.950785    3360 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0731 14:43:06.950872    3360 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 14:43:06.950886    3360 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 14:43:06.950893    3360 status.go:257] ha-875000 status: &{Name:ha-875000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 14:43:06.950903    3360 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 14:43:06.950917    3360 status.go:255] checking status of ha-875000-m02 ...
	I0731 14:43:06.951138    3360 status.go:330] ha-875000-m02 host status = "Stopped" (err=<nil>)
	I0731 14:43:06.951144    3360 status.go:343] host is not running, skipping remaining checks
	I0731 14:43:06.951147    3360 status.go:257] ha-875000-m02 status: &{Name:ha-875000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:43:06.951151    3360 status.go:255] checking status of ha-875000-m03 ...
	I0731 14:43:06.951976    3360 status.go:330] ha-875000-m03 host status = "Running" (err=<nil>)
	I0731 14:43:06.951991    3360 host.go:66] Checking if "ha-875000-m03" exists ...
	I0731 14:43:06.952292    3360 host.go:66] Checking if "ha-875000-m03" exists ...
	I0731 14:43:06.952472    3360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:43:06.952481    3360 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m03/id_rsa Username:docker}
	W0731 14:44:21.952755    3360 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 14:44:21.952805    3360 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0731 14:44:21.952814    3360 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 14:44:21.952818    3360 status.go:257] ha-875000-m03 status: &{Name:ha-875000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 14:44:21.952833    3360 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 14:44:21.952837    3360 status.go:255] checking status of ha-875000-m04 ...
	I0731 14:44:21.953601    3360 status.go:330] ha-875000-m04 host status = "Running" (err=<nil>)
	I0731 14:44:21.953611    3360 host.go:66] Checking if "ha-875000-m04" exists ...
	I0731 14:44:21.953734    3360 host.go:66] Checking if "ha-875000-m04" exists ...
	I0731 14:44:21.953858    3360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:44:21.953864    3360 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m04/id_rsa Username:docker}
	W0731 14:45:36.954572    3360 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 14:45:36.954620    3360 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0731 14:45:36.954629    3360 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 14:45:36.954634    3360 status.go:257] ha-875000-m04 status: &{Name:ha-875000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 14:45:36.954645    3360 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr": ha-875000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-875000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-875000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-875000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr": ha-875000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-875000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-875000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-875000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr": ha-875000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-875000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-875000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-875000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 3 (25.957787875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 14:46:02.911878    3390 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 14:46:02.911891    3390 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.11s)
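Every probe in this failure times out the same way: the SSH dial to the surviving nodes never connects, so minikube reports host: Error even though the VMs are nominally running. The guest IPs are in the log above, so reachability can be re-checked by hand (a sketch; nc ships with macOS):

	# probe the SSH port of the primary control-plane node reported at 192.168.105.5
	nc -z -w 5 192.168.105.5 22
	# re-run the exact status probe the test used
	out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr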

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (33.05s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (7.085575541s)
ha_test.go:413: expected profile "ha-875000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-875000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-875000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-875000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 3 (25.962878417s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 14:46:35.956432    3404 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 14:46:35.956470    3404 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (33.05s)
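The assertion reduces to a single field of the profile JSON: Status must read "Degraded" but comes back "Stopped". That field can be extracted from the same output for quick inspection (a sketch, assuming jq is available on the runner):

	# print the name and status of every valid profile
	out/minikube-darwin-arm64 profile list --output json | jq '.valid[] | {Name, Status}'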

TestMultiControlPlane/serial/RestartSecondaryNode (209.03s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-875000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.115048584s)

-- stdout --
	* Starting "ha-875000-m02" control-plane node in "ha-875000" cluster
	* Restarting existing qemu2 VM for "ha-875000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-875000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 14:46:36.024970    3409 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:46:36.025279    3409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:46:36.025284    3409 out.go:304] Setting ErrFile to fd 2...
	I0731 14:46:36.025287    3409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:46:36.025487    3409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:46:36.025805    3409 mustload.go:65] Loading cluster: ha-875000
	I0731 14:46:36.026122    3409 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 14:46:36.026426    3409 host.go:58] "ha-875000-m02" host status: Stopped
	I0731 14:46:36.030862    3409 out.go:177] * Starting "ha-875000-m02" control-plane node in "ha-875000" cluster
	I0731 14:46:36.034841    3409 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:46:36.034854    3409 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 14:46:36.034862    3409 cache.go:56] Caching tarball of preloaded images
	I0731 14:46:36.034939    3409 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 14:46:36.034947    3409 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 14:46:36.035021    3409 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/ha-875000/config.json ...
	I0731 14:46:36.035361    3409 start.go:360] acquireMachinesLock for ha-875000-m02: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:46:36.035408    3409 start.go:364] duration metric: took 32.291µs to acquireMachinesLock for "ha-875000-m02"
	I0731 14:46:36.035417    3409 start.go:96] Skipping create...Using existing machine configuration
	I0731 14:46:36.035422    3409 fix.go:54] fixHost starting: m02
	I0731 14:46:36.035576    3409 fix.go:112] recreateIfNeeded on ha-875000-m02: state=Stopped err=<nil>
	W0731 14:46:36.035583    3409 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 14:46:36.039827    3409 out.go:177] * Restarting existing qemu2 VM for "ha-875000-m02" ...
	I0731 14:46:36.043842    3409 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:46:36.043948    3409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:41:39:a4:f9:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/disk.qcow2
	I0731 14:46:36.046961    3409 main.go:141] libmachine: STDOUT: 
	I0731 14:46:36.046983    3409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:46:36.047015    3409 fix.go:56] duration metric: took 11.592791ms for fixHost
	I0731 14:46:36.047020    3409 start.go:83] releasing machines lock for "ha-875000-m02", held for 11.608125ms
	W0731 14:46:36.047037    3409 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:46:36.047068    3409 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:46:36.047073    3409 start.go:729] Will try again in 5 seconds ...
	I0731 14:46:41.048293    3409 start.go:360] acquireMachinesLock for ha-875000-m02: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:46:41.048430    3409 start.go:364] duration metric: took 106.5µs to acquireMachinesLock for "ha-875000-m02"
	I0731 14:46:41.048463    3409 start.go:96] Skipping create...Using existing machine configuration
	I0731 14:46:41.048468    3409 fix.go:54] fixHost starting: m02
	I0731 14:46:41.048616    3409 fix.go:112] recreateIfNeeded on ha-875000-m02: state=Stopped err=<nil>
	W0731 14:46:41.048623    3409 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 14:46:41.052206    3409 out.go:177] * Restarting existing qemu2 VM for "ha-875000-m02" ...
	I0731 14:46:41.056174    3409 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:46:41.056228    3409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:41:39:a4:f9:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/disk.qcow2
	I0731 14:46:41.058384    3409 main.go:141] libmachine: STDOUT: 
	I0731 14:46:41.058398    3409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:46:41.058419    3409 fix.go:56] duration metric: took 9.950791ms for fixHost
	I0731 14:46:41.058422    3409 start.go:83] releasing machines lock for "ha-875000-m02", held for 9.98575ms
	W0731 14:46:41.058459    3409 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:46:41.062223    3409 out.go:177] 
	W0731 14:46:41.066318    3409 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:46:41.066323    3409 out.go:239] * 
	* 
	W0731 14:46:41.067999    3409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 14:46:41.072203    3409 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0731 14:46:36.024970    3409 out.go:291] Setting OutFile to fd 1 ...
I0731 14:46:36.025279    3409 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:46:36.025284    3409 out.go:304] Setting ErrFile to fd 2...
I0731 14:46:36.025287    3409 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:46:36.025487    3409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 14:46:36.025805    3409 mustload.go:65] Loading cluster: ha-875000
I0731 14:46:36.026122    3409 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0731 14:46:36.026426    3409 host.go:58] "ha-875000-m02" host status: Stopped
I0731 14:46:36.030862    3409 out.go:177] * Starting "ha-875000-m02" control-plane node in "ha-875000" cluster
I0731 14:46:36.034841    3409 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 14:46:36.034854    3409 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 14:46:36.034862    3409 cache.go:56] Caching tarball of preloaded images
I0731 14:46:36.034939    3409 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 14:46:36.034947    3409 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 14:46:36.035021    3409 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/ha-875000/config.json ...
I0731 14:46:36.035361    3409 start.go:360] acquireMachinesLock for ha-875000-m02: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 14:46:36.035408    3409 start.go:364] duration metric: took 32.291µs to acquireMachinesLock for "ha-875000-m02"
I0731 14:46:36.035417    3409 start.go:96] Skipping create...Using existing machine configuration
I0731 14:46:36.035422    3409 fix.go:54] fixHost starting: m02
I0731 14:46:36.035576    3409 fix.go:112] recreateIfNeeded on ha-875000-m02: state=Stopped err=<nil>
W0731 14:46:36.035583    3409 fix.go:138] unexpected machine state, will restart: <nil>
I0731 14:46:36.039827    3409 out.go:177] * Restarting existing qemu2 VM for "ha-875000-m02" ...
I0731 14:46:36.043842    3409 qemu.go:418] Using hvf for hardware acceleration
I0731 14:46:36.043948    3409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:41:39:a4:f9:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/disk.qcow2
I0731 14:46:36.046961    3409 main.go:141] libmachine: STDOUT: 
I0731 14:46:36.046983    3409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 14:46:36.047015    3409 fix.go:56] duration metric: took 11.592791ms for fixHost
I0731 14:46:36.047020    3409 start.go:83] releasing machines lock for "ha-875000-m02", held for 11.608125ms
W0731 14:46:36.047037    3409 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 14:46:36.047068    3409 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 14:46:36.047073    3409 start.go:729] Will try again in 5 seconds ...
I0731 14:46:41.048293    3409 start.go:360] acquireMachinesLock for ha-875000-m02: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 14:46:41.048430    3409 start.go:364] duration metric: took 106.5µs to acquireMachinesLock for "ha-875000-m02"
I0731 14:46:41.048463    3409 start.go:96] Skipping create...Using existing machine configuration
I0731 14:46:41.048468    3409 fix.go:54] fixHost starting: m02
I0731 14:46:41.048616    3409 fix.go:112] recreateIfNeeded on ha-875000-m02: state=Stopped err=<nil>
W0731 14:46:41.048623    3409 fix.go:138] unexpected machine state, will restart: <nil>
I0731 14:46:41.052206    3409 out.go:177] * Restarting existing qemu2 VM for "ha-875000-m02" ...
I0731 14:46:41.056174    3409 qemu.go:418] Using hvf for hardware acceleration
I0731 14:46:41.056228    3409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:41:39:a4:f9:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m02/disk.qcow2
I0731 14:46:41.058384    3409 main.go:141] libmachine: STDOUT: 
I0731 14:46:41.058398    3409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 14:46:41.058419    3409 fix.go:56] duration metric: took 9.950791ms for fixHost
I0731 14:46:41.058422    3409 start.go:83] releasing machines lock for "ha-875000-m02", held for 9.98575ms
W0731 14:46:41.058459    3409 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 14:46:41.062223    3409 out.go:177] 
W0731 14:46:41.066318    3409 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 14:46:41.066323    3409 out.go:239] * 
* 
W0731 14:46:41.067999    3409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 14:46:41.072203    3409 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-875000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr
E0731 14:47:25.976289    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:47:53.677553    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr: exit status 7 (2m57.951556667s)

-- stdout --
	ha-875000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-875000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-875000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 14:46:41.108368    3413 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:46:41.108527    3413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:46:41.108531    3413 out.go:304] Setting ErrFile to fd 2...
	I0731 14:46:41.108533    3413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:46:41.108673    3413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:46:41.108790    3413 out.go:298] Setting JSON to false
	I0731 14:46:41.108800    3413 mustload.go:65] Loading cluster: ha-875000
	I0731 14:46:41.108864    3413 notify.go:220] Checking for updates...
	I0731 14:46:41.109025    3413 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:46:41.109031    3413 status.go:255] checking status of ha-875000 ...
	I0731 14:46:41.109753    3413 status.go:330] ha-875000 host status = "Running" (err=<nil>)
	I0731 14:46:41.109763    3413 host.go:66] Checking if "ha-875000" exists ...
	I0731 14:46:41.109873    3413 host.go:66] Checking if "ha-875000" exists ...
	I0731 14:46:41.109989    3413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:46:41.109996    3413 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/id_rsa Username:docker}
	W0731 14:46:41.110183    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 14:46:41.110200    3413 retry.go:31] will retry after 173.463693ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 14:46:41.285731    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 14:46:41.285749    3413 retry.go:31] will retry after 528.133803ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 14:46:41.816099    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 14:46:41.816127    3413 retry.go:31] will retry after 619.846701ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 14:46:42.438150    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 14:46:42.438173    3413 retry.go:31] will retry after 662.249447ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 14:47:09.023870    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0731 14:47:09.023957    3413 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 14:47:09.023966    3413 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 14:47:09.023970    3413 status.go:257] ha-875000 status: &{Name:ha-875000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 14:47:09.023982    3413 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 14:47:09.023986    3413 status.go:255] checking status of ha-875000-m02 ...
	I0731 14:47:09.024214    3413 status.go:330] ha-875000-m02 host status = "Stopped" (err=<nil>)
	I0731 14:47:09.024219    3413 status.go:343] host is not running, skipping remaining checks
	I0731 14:47:09.024221    3413 status.go:257] ha-875000-m02 status: &{Name:ha-875000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:47:09.024226    3413 status.go:255] checking status of ha-875000-m03 ...
	I0731 14:47:09.024836    3413 status.go:330] ha-875000-m03 host status = "Running" (err=<nil>)
	I0731 14:47:09.024841    3413 host.go:66] Checking if "ha-875000-m03" exists ...
	I0731 14:47:09.024932    3413 host.go:66] Checking if "ha-875000-m03" exists ...
	I0731 14:47:09.025053    3413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:47:09.025059    3413 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m03/id_rsa Username:docker}
	W0731 14:48:24.023533    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 14:48:24.023576    3413 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0731 14:48:24.023586    3413 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 14:48:24.023598    3413 status.go:257] ha-875000-m03 status: &{Name:ha-875000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 14:48:24.023621    3413 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 14:48:24.023625    3413 status.go:255] checking status of ha-875000-m04 ...
	I0731 14:48:24.024302    3413 status.go:330] ha-875000-m04 host status = "Running" (err=<nil>)
	I0731 14:48:24.024311    3413 host.go:66] Checking if "ha-875000-m04" exists ...
	I0731 14:48:24.024405    3413 host.go:66] Checking if "ha-875000-m04" exists ...
	I0731 14:48:24.024515    3413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:48:24.024520    3413 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000-m04/id_rsa Username:docker}
	W0731 14:49:39.024549    3413 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 14:49:39.024595    3413 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0731 14:49:39.024603    3413 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 14:49:39.024606    3413 status.go:257] ha-875000-m04 status: &{Name:ha-875000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 14:49:39.024615    3413 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr" : exit status 7
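
The sshutil/retry lines in the stderr above all follow one pattern: dial port 22, fail with "host is down", sleep a slightly longer delay, redial, and finally surface "operation timed out". A compact sketch of that loop, assuming a simple doubling delay (minikube's retry.go uses its own backoff; this is illustrative only):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry is an illustrative stand-in for the retry visible in the
// log: each failed dial is logged along with the next delay, the delay
// grows, and the last error is returned once the attempts are spent.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return nil, lastErr
}

func main() {
	conn, err := dialWithRetry("192.168.105.5:22", 4)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	conn.Close()
}
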
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 3 (25.958094958s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 14:50:04.982508    3446 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 14:50:04.982516    3446 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.03s)
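
Each restart attempt in this test fails at the same precondition: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so qemu never receives its network file descriptor. A quick probe for that precondition, assuming (as the errno suggests) that the daemon listens on a unix stream socket at that path; it may also need the same privileges the driver runs with:

package main

import (
	"fmt"
	"net"
)

func main() {
	// ECONNREFUSED here means the socket file exists but no daemon is
	// accepting on it; a missing file fails with "no such file or directory".
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unavailable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
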

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.44s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-875000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-875000 -v=7 --alsologtostderr
E0731 14:51:41.376045    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:52:25.970735    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-875000 -v=7 --alsologtostderr: (3m49.033274166s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-875000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-875000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231546667s)

-- stdout --
	* [ha-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-875000" primary control-plane node in "ha-875000" cluster
	* Restarting existing qemu2 VM for "ha-875000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-875000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 14:55:12.104595    3532 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:55:12.104765    3532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:55:12.104770    3532 out.go:304] Setting ErrFile to fd 2...
	I0731 14:55:12.104774    3532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:55:12.104949    3532 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:55:12.106305    3532 out.go:298] Setting JSON to false
	I0731 14:55:12.126417    3532 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3276,"bootTime":1722459636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:55:12.126504    3532 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:55:12.132040    3532 out.go:177] * [ha-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:55:12.138969    3532 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 14:55:12.139004    3532 notify.go:220] Checking for updates...
	I0731 14:55:12.147827    3532 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:55:12.150980    3532 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:55:12.153990    3532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:55:12.157059    3532 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 14:55:12.159970    3532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:55:12.163316    3532 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:55:12.163376    3532 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:55:12.167999    3532 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 14:55:12.175017    3532 start.go:297] selected driver: qemu2
	I0731 14:55:12.175024    3532 start.go:901] validating driver "qemu2" against &{Name:ha-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-875000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:55:12.175119    3532 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:55:12.177713    3532 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 14:55:12.177765    3532 cni.go:84] Creating CNI manager for ""
	I0731 14:55:12.177771    3532 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 14:55:12.177818    3532 start.go:340] cluster config:
	{Name:ha-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-875000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:55:12.182023    3532 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:55:12.189997    3532 out.go:177] * Starting "ha-875000" primary control-plane node in "ha-875000" cluster
	I0731 14:55:12.193996    3532 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:55:12.194008    3532 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 14:55:12.194018    3532 cache.go:56] Caching tarball of preloaded images
	I0731 14:55:12.194068    3532 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 14:55:12.194075    3532 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 14:55:12.194132    3532 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/ha-875000/config.json ...
	I0731 14:55:12.194714    3532 start.go:360] acquireMachinesLock for ha-875000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:55:12.194748    3532 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "ha-875000"
	I0731 14:55:12.194757    3532 start.go:96] Skipping create...Using existing machine configuration
	I0731 14:55:12.194765    3532 fix.go:54] fixHost starting: 
	I0731 14:55:12.194881    3532 fix.go:112] recreateIfNeeded on ha-875000: state=Stopped err=<nil>
	W0731 14:55:12.194889    3532 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 14:55:12.199057    3532 out.go:177] * Restarting existing qemu2 VM for "ha-875000" ...
	I0731 14:55:12.206898    3532 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:55:12.206934    3532 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:c3:ec:20:10:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/disk.qcow2
	I0731 14:55:12.208935    3532 main.go:141] libmachine: STDOUT: 
	I0731 14:55:12.208958    3532 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:55:12.208984    3532 fix.go:56] duration metric: took 14.220792ms for fixHost
	I0731 14:55:12.208989    3532 start.go:83] releasing machines lock for "ha-875000", held for 14.237208ms
	W0731 14:55:12.208995    3532 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:55:12.209025    3532 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:55:12.209029    3532 start.go:729] Will try again in 5 seconds ...
	I0731 14:55:17.211130    3532 start.go:360] acquireMachinesLock for ha-875000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:55:17.211628    3532 start.go:364] duration metric: took 374.541µs to acquireMachinesLock for "ha-875000"
	I0731 14:55:17.211752    3532 start.go:96] Skipping create...Using existing machine configuration
	I0731 14:55:17.211772    3532 fix.go:54] fixHost starting: 
	I0731 14:55:17.212466    3532 fix.go:112] recreateIfNeeded on ha-875000: state=Stopped err=<nil>
	W0731 14:55:17.212492    3532 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 14:55:17.219919    3532 out.go:177] * Restarting existing qemu2 VM for "ha-875000" ...
	I0731 14:55:17.223835    3532 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:55:17.224089    3532 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:c3:ec:20:10:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/disk.qcow2
	I0731 14:55:17.232951    3532 main.go:141] libmachine: STDOUT: 
	I0731 14:55:17.233008    3532 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:55:17.233068    3532 fix.go:56] duration metric: took 21.297834ms for fixHost
	I0731 14:55:17.233088    3532 start.go:83] releasing machines lock for "ha-875000", held for 21.434917ms
	W0731 14:55:17.233239    3532 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:55:17.241821    3532 out.go:177] 
	W0731 14:55:17.245940    3532 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:55:17.245976    3532 out.go:239] * 
	* 
	W0731 14:55:17.248716    3532 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 14:55:17.260840    3532 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-875000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-875000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (33.326125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.44s)
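
As the "executing:" lines show, the qemu2 driver does not start qemu-system-aarch64 directly: it runs socket_vmnet_client, which connects to /var/run/socket_vmnet and passes that connection to qemu as inherited fd 3 for "-netdev socket,id=net0,fd=3". A trimmed sketch of the invocation (flags abbreviated from the log; not the driver's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "2200", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is supplied by the wrapper
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no daemon behind the socket this fails just like the log:
		// ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Printf("%s(%v)\n", out, err)
	}
}
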

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-875000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.058834ms)

-- stdout --
	* The control-plane node ha-875000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-875000"

-- /stdout --
** stderr ** 
	I0731 14:55:17.401845    3544 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:55:17.402147    3544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:55:17.402151    3544 out.go:304] Setting ErrFile to fd 2...
	I0731 14:55:17.402153    3544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:55:17.402277    3544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:55:17.402491    3544 mustload.go:65] Loading cluster: ha-875000
	I0731 14:55:17.402708    3544 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 14:55:17.403034    3544 out.go:239] ! The control-plane node ha-875000 host is not running (will try others): state=Stopped
	! The control-plane node ha-875000 host is not running (will try others): state=Stopped
	W0731 14:55:17.403153    3544 out.go:239] ! The control-plane node ha-875000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-875000-m02 host is not running (will try others): state=Stopped
	I0731 14:55:17.407296    3544 out.go:177] * The control-plane node ha-875000-m03 host is not running: state=Stopped
	I0731 14:55:17.410305    3544 out.go:177]   To start a cluster, run: "minikube start -p ha-875000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-875000 node delete m03 -v=7 --alsologtostderr": exit status 83
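
Exit status 83 here is simply what the harness records from the advice-path exit above, as with statuses 80, 7, and 3 elsewhere in this report. A small sketch of recovering that numeric code from a finished command with the standard library (the helper name is mine):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndReport runs a command and, on failure, surfaces the numeric exit
// code the way the "Non-zero exit ... exit status N" lines do.
func runAndReport(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("non-zero exit %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run:", err)
		return
	}
	fmt.Printf("%s", out)
}

func main() {
	runAndReport("out/minikube-darwin-arm64", "-p", "ha-875000", "node", "delete", "m03")
}
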
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr: exit status 7 (29.26925ms)

-- stdout --
	ha-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 14:55:17.441443    3546 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:55:17.441599    3546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:55:17.441602    3546 out.go:304] Setting ErrFile to fd 2...
	I0731 14:55:17.441604    3546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:55:17.441731    3546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:55:17.441843    3546 out.go:298] Setting JSON to false
	I0731 14:55:17.441852    3546 mustload.go:65] Loading cluster: ha-875000
	I0731 14:55:17.441903    3546 notify.go:220] Checking for updates...
	I0731 14:55:17.442121    3546 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:55:17.442128    3546 status.go:255] checking status of ha-875000 ...
	I0731 14:55:17.442327    3546 status.go:330] ha-875000 host status = "Stopped" (err=<nil>)
	I0731 14:55:17.442330    3546 status.go:343] host is not running, skipping remaining checks
	I0731 14:55:17.442332    3546 status.go:257] ha-875000 status: &{Name:ha-875000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:55:17.442345    3546 status.go:255] checking status of ha-875000-m02 ...
	I0731 14:55:17.442430    3546 status.go:330] ha-875000-m02 host status = "Stopped" (err=<nil>)
	I0731 14:55:17.442433    3546 status.go:343] host is not running, skipping remaining checks
	I0731 14:55:17.442435    3546 status.go:257] ha-875000-m02 status: &{Name:ha-875000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:55:17.442439    3546 status.go:255] checking status of ha-875000-m03 ...
	I0731 14:55:17.442521    3546 status.go:330] ha-875000-m03 host status = "Stopped" (err=<nil>)
	I0731 14:55:17.442524    3546 status.go:343] host is not running, skipping remaining checks
	I0731 14:55:17.442525    3546 status.go:257] ha-875000-m03 status: &{Name:ha-875000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:55:17.442529    3546 status.go:255] checking status of ha-875000-m04 ...
	I0731 14:55:17.442632    3546 status.go:330] ha-875000-m04 host status = "Stopped" (err=<nil>)
	I0731 14:55:17.442636    3546 status.go:343] host is not running, skipping remaining checks
	I0731 14:55:17.442638    3546 status.go:257] ha-875000-m04 status: &{Name:ha-875000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (29.960042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
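Both failures above share one root cause: no VM in the ha-875000 cluster is running. "node delete" exits 83 after finding no running control-plane host, and the follow-up "minikube status" exits 7 with every node reporting Stopped. A minimal Go sketch of the status check the harness performs (run the binary, print its output, recover the exit code; binary path and profile name are taken from the log above, and this is illustrative, not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile and binary path mirror the log above.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "ha-875000", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		// Exit status 7 corresponds to the all-nodes-Stopped output above.
		fmt.Println("exit status:", ee.ExitCode())
	}
}

The *exec.ExitError carries the non-zero status that assertions like the ones at ha_test.go:489 and ha_test.go:495 compare against.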

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0731 14:55:18.295942    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
ha_test.go:413: expected profile "ha-875000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-875000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-875000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-875000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (55.614917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.03s)
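The assertion at ha_test.go:413 inspects the Status field of the ha-875000 entry in "profile list --output json", expecting "Degraded" once a secondary control-plane node is gone; because every VM is stopped, the profile reports "Stopped" instead. A sketch of the decoding involved, using a struct that mirrors only the two fields visible in the JSON above (it is not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the "valid" entries' Name and Status fields,
// as they appear in the log output above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // expected "Degraded"; log shows "Stopped"
	}
}

Decoding into a narrow struct like this keeps the check independent of the large Config payload dumped in the failure message.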

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 stop -v=7 --alsologtostderr
E0731 14:57:25.965641    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-875000 stop -v=7 --alsologtostderr: (3m21.984898833s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr: exit status 7 (61.870583ms)

                                                
                                                
-- stdout --
	ha-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-875000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 14:58:40.548292    3618 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:58:40.548503    3618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:58:40.548507    3618 out.go:304] Setting ErrFile to fd 2...
	I0731 14:58:40.548511    3618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:58:40.548677    3618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:58:40.548836    3618 out.go:298] Setting JSON to false
	I0731 14:58:40.548848    3618 mustload.go:65] Loading cluster: ha-875000
	I0731 14:58:40.548901    3618 notify.go:220] Checking for updates...
	I0731 14:58:40.549201    3618 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:58:40.549214    3618 status.go:255] checking status of ha-875000 ...
	I0731 14:58:40.549488    3618 status.go:330] ha-875000 host status = "Stopped" (err=<nil>)
	I0731 14:58:40.549492    3618 status.go:343] host is not running, skipping remaining checks
	I0731 14:58:40.549495    3618 status.go:257] ha-875000 status: &{Name:ha-875000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:58:40.549508    3618 status.go:255] checking status of ha-875000-m02 ...
	I0731 14:58:40.549643    3618 status.go:330] ha-875000-m02 host status = "Stopped" (err=<nil>)
	I0731 14:58:40.549648    3618 status.go:343] host is not running, skipping remaining checks
	I0731 14:58:40.549651    3618 status.go:257] ha-875000-m02 status: &{Name:ha-875000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:58:40.549656    3618 status.go:255] checking status of ha-875000-m03 ...
	I0731 14:58:40.549787    3618 status.go:330] ha-875000-m03 host status = "Stopped" (err=<nil>)
	I0731 14:58:40.549792    3618 status.go:343] host is not running, skipping remaining checks
	I0731 14:58:40.549794    3618 status.go:257] ha-875000-m03 status: &{Name:ha-875000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:58:40.549799    3618 status.go:255] checking status of ha-875000-m04 ...
	I0731 14:58:40.549922    3618 status.go:330] ha-875000-m04 host status = "Stopped" (err=<nil>)
	I0731 14:58:40.549925    3618 status.go:343] host is not running, skipping remaining checks
	I0731 14:58:40.549928    3618 status.go:257] ha-875000-m04 status: &{Name:ha-875000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr": ha-875000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr": ha-875000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr": ha-875000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-875000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (31.279958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)
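The three assertions here (ha_test.go:543, :549, :552) scan the same status text for running control-plane nodes, stopped kubelets, and stopped apiservers; the stop itself succeeded, but with all four hosts down none of the expected counts line up. Counting those markers can be as simple as the following sketch (illustrative string matching over an abbreviated copy of the status above, not the test's implementation):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated from the status output above.
	status := `ha-875000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped

ha-875000-m04
type: Worker
host: Stopped
kubelet: Stopped`

	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
}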

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-875000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-875000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.179926125s)

                                                
                                                
-- stdout --
	* [ha-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-875000" primary control-plane node in "ha-875000" cluster
	* Restarting existing qemu2 VM for "ha-875000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-875000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 14:58:40.610373    3622 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:58:40.610522    3622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:58:40.610525    3622 out.go:304] Setting ErrFile to fd 2...
	I0731 14:58:40.610528    3622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:58:40.610667    3622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:58:40.611704    3622 out.go:298] Setting JSON to false
	I0731 14:58:40.627652    3622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3484,"bootTime":1722459636,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:58:40.627727    3622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:58:40.632910    3622 out.go:177] * [ha-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:58:40.639795    3622 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 14:58:40.639872    3622 notify.go:220] Checking for updates...
	I0731 14:58:40.645165    3622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:58:40.647802    3622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:58:40.650824    3622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:58:40.653851    3622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 14:58:40.656774    3622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:58:40.660067    3622 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:58:40.660317    3622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:58:40.664741    3622 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 14:58:40.671806    3622 start.go:297] selected driver: qemu2
	I0731 14:58:40.671812    3622 start.go:901] validating driver "qemu2" against &{Name:ha-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-875000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:58:40.671892    3622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:58:40.674057    3622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 14:58:40.674095    3622 cni.go:84] Creating CNI manager for ""
	I0731 14:58:40.674099    3622 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 14:58:40.674146    3622 start.go:340] cluster config:
	{Name:ha-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-875000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:58:40.677562    3622 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:58:40.685751    3622 out.go:177] * Starting "ha-875000" primary control-plane node in "ha-875000" cluster
	I0731 14:58:40.689830    3622 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:58:40.689846    3622 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 14:58:40.689860    3622 cache.go:56] Caching tarball of preloaded images
	I0731 14:58:40.689924    3622 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 14:58:40.689934    3622 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 14:58:40.690013    3622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/ha-875000/config.json ...
	I0731 14:58:40.690445    3622 start.go:360] acquireMachinesLock for ha-875000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:58:40.690480    3622 start.go:364] duration metric: took 28.709µs to acquireMachinesLock for "ha-875000"
	I0731 14:58:40.690490    3622 start.go:96] Skipping create...Using existing machine configuration
	I0731 14:58:40.690495    3622 fix.go:54] fixHost starting: 
	I0731 14:58:40.690613    3622 fix.go:112] recreateIfNeeded on ha-875000: state=Stopped err=<nil>
	W0731 14:58:40.690622    3622 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 14:58:40.694822    3622 out.go:177] * Restarting existing qemu2 VM for "ha-875000" ...
	I0731 14:58:40.702725    3622 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:58:40.702765    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:c3:ec:20:10:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/disk.qcow2
	I0731 14:58:40.704753    3622 main.go:141] libmachine: STDOUT: 
	I0731 14:58:40.704773    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:58:40.704814    3622 fix.go:56] duration metric: took 14.319459ms for fixHost
	I0731 14:58:40.704819    3622 start.go:83] releasing machines lock for "ha-875000", held for 14.334708ms
	W0731 14:58:40.704827    3622 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:58:40.704869    3622 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:58:40.704873    3622 start.go:729] Will try again in 5 seconds ...
	I0731 14:58:45.706991    3622 start.go:360] acquireMachinesLock for ha-875000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:58:45.707505    3622 start.go:364] duration metric: took 404.167µs to acquireMachinesLock for "ha-875000"
	I0731 14:58:45.707806    3622 start.go:96] Skipping create...Using existing machine configuration
	I0731 14:58:45.707829    3622 fix.go:54] fixHost starting: 
	I0731 14:58:45.708534    3622 fix.go:112] recreateIfNeeded on ha-875000: state=Stopped err=<nil>
	W0731 14:58:45.708559    3622 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 14:58:45.712825    3622 out.go:177] * Restarting existing qemu2 VM for "ha-875000" ...
	I0731 14:58:45.719971    3622 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:58:45.720283    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:c3:ec:20:10:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/ha-875000/disk.qcow2
	I0731 14:58:45.729111    3622 main.go:141] libmachine: STDOUT: 
	I0731 14:58:45.729187    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:58:45.729269    3622 fix.go:56] duration metric: took 21.442792ms for fixHost
	I0731 14:58:45.729291    3622 start.go:83] releasing machines lock for "ha-875000", held for 21.594584ms
	W0731 14:58:45.729533    3622 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-875000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:58:45.736948    3622 out.go:177] 
	W0731 14:58:45.741044    3622 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:58:45.741068    3622 out.go:239] * 
	* 
	W0731 14:58:45.743622    3622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 14:58:45.750937    3622 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-875000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (68.546042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
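Every start and restart in this report dies on the same driver-level error: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning the socket_vmnet daemon is not listening on the CI host. A quick Go probe (socket path taken from the log) reproduces the refusal without involving minikube at all:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // "connection refused" on this host
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Until that socket accepts connections, each retry ("Will try again in 5 seconds ...") fails identically, which is why this RestartCluster attempt exits 80 after roughly five seconds.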

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-875000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-875000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-875000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-875000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (28.805833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-875000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-875000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.880667ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-875000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-875000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 14:58:45.938367    3640 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:58:45.938482    3640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:58:45.938485    3640 out.go:304] Setting ErrFile to fd 2...
	I0731 14:58:45.938487    3640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:58:45.938616    3640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:58:45.938842    3640 mustload.go:65] Loading cluster: ha-875000
	I0731 14:58:45.939062    3640 config.go:182] Loaded profile config "ha-875000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 14:58:45.939366    3640 out.go:239] ! The control-plane node ha-875000 host is not running (will try others): state=Stopped
	! The control-plane node ha-875000 host is not running (will try others): state=Stopped
	W0731 14:58:45.939469    3640 out.go:239] ! The control-plane node ha-875000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-875000-m02 host is not running (will try others): state=Stopped
	I0731 14:58:45.943941    3640 out.go:177] * The control-plane node ha-875000-m03 host is not running: state=Stopped
	I0731 14:58:45.947876    3640 out.go:177]   To start a cluster, run: "minikube start -p ha-875000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-875000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-875000 -n ha-875000: exit status 7 (28.383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-875000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
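Three exit codes recur throughout this report: 7 from "status" when every node is Stopped, 80 (GUEST_PROVISION) when the qemu2 VM cannot be started, and 83 when a command needs a running control-plane host and finds none, as "node add" just did. The mapping below is descriptive of these runs only, not minikube's authoritative exit-code table:

package main

import "fmt"

func main() {
	// Meanings as attached by the surrounding log text, not by minikube docs.
	observed := map[int]string{
		7:  `"minikube status" with every node reporting Stopped`,
		80: `GUEST_PROVISION: the qemu2 VM could not be (re)started`,
		83: `no running control-plane host; minikube prints "To start a cluster" advice`,
	}
	for _, code := range []int{7, 80, 83} { // fixed order for stable output
		fmt.Printf("exit %d: %s\n", code, observed[code])
	}
}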

                                                
                                    
TestImageBuild/serial/Setup (9.99s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-653000 --driver=qemu2 
E0731 14:58:49.027464    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-653000 --driver=qemu2 : exit status 80 (9.923516s)

                                                
                                                
-- stdout --
	* [image-653000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-653000" primary control-plane node in "image-653000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-653000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-653000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-653000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-653000 -n image-653000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-653000 -n image-653000: exit status 7 (66.706917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-653000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.99s)

                                                
                                    
TestJSONOutput/start/Command (10.02s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-338000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-338000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.018178875s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4cbc09df-5e2e-45be-83a6-f5d1c1636295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-338000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3af79eca-e6b5-4cf6-80ca-2b76e998ece3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"75b904c1-2ea9-4bc5-a0e5-8fd016c2aea2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig"}}
	{"specversion":"1.0","id":"f1627163-ac88-4b11-b572-21c9ad8e5d28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"66e22c87-a677-4171-962a-967bfa941d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1747b6ae-be59-411c-974f-301c91c0e0a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube"}}
	{"specversion":"1.0","id":"03d1ce24-9960-496f-892c-e2fcff68eec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2aa4d6d6-7170-4815-acae-f56dfe6d0e84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd016dbe-45d0-43a1-8fd7-05bb02acea64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"40ba3c2e-9dc2-45ca-a5ca-ed5bd79f4ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-338000\" primary control-plane node in \"json-output-338000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"20aedb96-5696-4533-be89-ac2fd967b014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"da69add7-89bf-4c84-8b25-34c3ece29904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-338000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"42fa78af-b2a2-4c70-9f21-7d080afe9d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a54ff92d-914a-4e83-aec3-56697d3a82a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d964ed80-5359-4545-9ffc-37045e9c6953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-338000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"7a2d2931-e03f-4d5b-a495-54000f5b9cf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bcbfde23-0ce8-4b1e-b09a-bbc72d42c685","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-338000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.02s)
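TestJSONOutput decodes stdout line by line as CloudEvents, so the raw "OUTPUT:" and "ERROR: ..." lines that socket_vmnet_client writes in the middle of the JSON stream break the parse at the first non-JSON byte, hence "invalid character 'O' looking for beginning of value" at json_output_test.go:70 (the unpause failure further down trips on the leading '*' of human-readable output the same way). A sketch of that per-line decoding, run against an excerpt of the mixed stdout above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Excerpt of the stdout shown above: one event line, then raw driver output.
	out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// The test fails here; this sketch just reports and moves on.
			fmt.Println("not a CloudEvent:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}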

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-338000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-338000 --output=json --user=testUser: exit status 83 (76.203916ms)

-- stdout --
	{"specversion":"1.0","id":"51a60a11-14df-4e07-b496-ce7991606c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-338000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"7229ddce-b424-4499-a5ef-238ea34ba9aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-338000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-338000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-338000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-338000 --output=json --user=testUser: exit status 83 (42.322792ms)

-- stdout --
	* The control-plane node json-output-338000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-338000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-338000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-338000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-887000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-887000 --driver=qemu2 : exit status 80 (9.809245583s)

-- stdout --
	* [first-887000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-887000" primary control-plane node in "first-887000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-887000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-887000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 14:59:18.964597 -0700 PDT m=+1990.404397917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-889000 -n second-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-889000 -n second-889000: exit status 85 (78.967625ms)

-- stdout --
	* Profile "second-889000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-889000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-889000" host is not running, skipping log retrieval (state="* Profile \"second-889000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-889000\"")
helpers_test.go:175: Cleaning up "second-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-889000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 14:59:19.15069 -0700 PDT m=+1990.590494001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-887000 -n first-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-887000 -n first-887000: exit status 7 (29.443375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-887000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-887000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-887000
--- FAIL: TestMinikubeProfile (10.10s)

TestMountStart/serial/StartWithMountFirst (10.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-697000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-697000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.017696875s)

-- stdout --
	* [mount-start-1-697000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-697000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-697000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-697000 -n mount-start-1-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-697000 -n mount-start-1-697000: exit status 7 (66.88175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-697000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-740000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-740000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.811706333s)

-- stdout --
	* [multinode-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-740000" primary control-plane node in "multinode-740000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-740000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 14:59:29.548387    3782 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:59:29.548540    3782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:59:29.548543    3782 out.go:304] Setting ErrFile to fd 2...
	I0731 14:59:29.548546    3782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:59:29.548693    3782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:59:29.549757    3782 out.go:298] Setting JSON to false
	I0731 14:59:29.565950    3782 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3533,"bootTime":1722459636,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:59:29.566018    3782 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:59:29.573037    3782 out.go:177] * [multinode-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:59:29.580955    3782 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 14:59:29.580984    3782 notify.go:220] Checking for updates...
	I0731 14:59:29.588911    3782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:59:29.591915    3782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:59:29.594970    3782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:59:29.596342    3782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 14:59:29.598925    3782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:59:29.602140    3782 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:59:29.605771    3782 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 14:59:29.612984    3782 start.go:297] selected driver: qemu2
	I0731 14:59:29.612991    3782 start.go:901] validating driver "qemu2" against <nil>
	I0731 14:59:29.612998    3782 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:59:29.615442    3782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:59:29.618940    3782 out.go:177] * Automatically selected the socket_vmnet network
	I0731 14:59:29.622026    3782 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 14:59:29.622074    3782 cni.go:84] Creating CNI manager for ""
	I0731 14:59:29.622080    3782 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 14:59:29.622089    3782 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 14:59:29.622116    3782 start.go:340] cluster config:
	{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:59:29.625987    3782 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:59:29.632916    3782 out.go:177] * Starting "multinode-740000" primary control-plane node in "multinode-740000" cluster
	I0731 14:59:29.636922    3782 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:59:29.636938    3782 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 14:59:29.636952    3782 cache.go:56] Caching tarball of preloaded images
	I0731 14:59:29.637012    3782 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 14:59:29.637022    3782 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 14:59:29.637253    3782 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/multinode-740000/config.json ...
	I0731 14:59:29.637264    3782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/multinode-740000/config.json: {Name:mk0b707269d53ce0bf6e753070c4961209a46647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:59:29.637702    3782 start.go:360] acquireMachinesLock for multinode-740000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:59:29.637741    3782 start.go:364] duration metric: took 32.334µs to acquireMachinesLock for "multinode-740000"
	I0731 14:59:29.637755    3782 start.go:93] Provisioning new machine with config: &{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 14:59:29.637788    3782 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 14:59:29.645940    3782 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 14:59:29.663890    3782 start.go:159] libmachine.API.Create for "multinode-740000" (driver="qemu2")
	I0731 14:59:29.663915    3782 client.go:168] LocalClient.Create starting
	I0731 14:59:29.663987    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 14:59:29.664017    3782 main.go:141] libmachine: Decoding PEM data...
	I0731 14:59:29.664027    3782 main.go:141] libmachine: Parsing certificate...
	I0731 14:59:29.664069    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 14:59:29.664096    3782 main.go:141] libmachine: Decoding PEM data...
	I0731 14:59:29.664103    3782 main.go:141] libmachine: Parsing certificate...
	I0731 14:59:29.664483    3782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 14:59:29.815150    3782 main.go:141] libmachine: Creating SSH key...
	I0731 14:59:29.844243    3782 main.go:141] libmachine: Creating Disk image...
	I0731 14:59:29.844249    3782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 14:59:29.844438    3782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 14:59:29.853732    3782 main.go:141] libmachine: STDOUT: 
	I0731 14:59:29.853753    3782 main.go:141] libmachine: STDERR: 
	I0731 14:59:29.853807    3782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2 +20000M
	I0731 14:59:29.861727    3782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 14:59:29.861746    3782 main.go:141] libmachine: STDERR: 
	I0731 14:59:29.861758    3782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 14:59:29.861763    3782 main.go:141] libmachine: Starting QEMU VM...
	I0731 14:59:29.861773    3782 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:59:29.861803    3782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:f6:7d:eb:06:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 14:59:29.863429    3782 main.go:141] libmachine: STDOUT: 
	I0731 14:59:29.863446    3782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:59:29.863463    3782 client.go:171] duration metric: took 199.540791ms to LocalClient.Create
	I0731 14:59:31.865613    3782 start.go:128] duration metric: took 2.227843917s to createHost
	I0731 14:59:31.865703    3782 start.go:83] releasing machines lock for "multinode-740000", held for 2.227972916s
	W0731 14:59:31.865759    3782 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:59:31.879159    3782 out.go:177] * Deleting "multinode-740000" in qemu2 ...
	W0731 14:59:31.907651    3782 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:59:31.907690    3782 start.go:729] Will try again in 5 seconds ...
	I0731 14:59:36.909786    3782 start.go:360] acquireMachinesLock for multinode-740000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 14:59:36.910289    3782 start.go:364] duration metric: took 389.458µs to acquireMachinesLock for "multinode-740000"
	I0731 14:59:36.910423    3782 start.go:93] Provisioning new machine with config: &{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 14:59:36.910714    3782 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 14:59:36.925271    3782 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 14:59:36.975273    3782 start.go:159] libmachine.API.Create for "multinode-740000" (driver="qemu2")
	I0731 14:59:36.975321    3782 client.go:168] LocalClient.Create starting
	I0731 14:59:36.975441    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 14:59:36.975509    3782 main.go:141] libmachine: Decoding PEM data...
	I0731 14:59:36.975524    3782 main.go:141] libmachine: Parsing certificate...
	I0731 14:59:36.975600    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 14:59:36.975643    3782 main.go:141] libmachine: Decoding PEM data...
	I0731 14:59:36.975654    3782 main.go:141] libmachine: Parsing certificate...
	I0731 14:59:36.976155    3782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 14:59:37.138203    3782 main.go:141] libmachine: Creating SSH key...
	I0731 14:59:37.265619    3782 main.go:141] libmachine: Creating Disk image...
	I0731 14:59:37.265624    3782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 14:59:37.265823    3782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 14:59:37.275027    3782 main.go:141] libmachine: STDOUT: 
	I0731 14:59:37.275051    3782 main.go:141] libmachine: STDERR: 
	I0731 14:59:37.275111    3782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2 +20000M
	I0731 14:59:37.282928    3782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 14:59:37.282955    3782 main.go:141] libmachine: STDERR: 
	I0731 14:59:37.282974    3782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 14:59:37.282977    3782 main.go:141] libmachine: Starting QEMU VM...
	I0731 14:59:37.282989    3782 qemu.go:418] Using hvf for hardware acceleration
	I0731 14:59:37.283020    3782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:65:61:10:2d:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 14:59:37.284676    3782 main.go:141] libmachine: STDOUT: 
	I0731 14:59:37.284693    3782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 14:59:37.284712    3782 client.go:171] duration metric: took 309.392792ms to LocalClient.Create
	I0731 14:59:39.286861    3782 start.go:128] duration metric: took 2.376162417s to createHost
	I0731 14:59:39.286923    3782 start.go:83] releasing machines lock for "multinode-740000", held for 2.376652125s
	W0731 14:59:39.287269    3782 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 14:59:39.297834    3782 out.go:177] 
	W0731 14:59:39.307978    3782 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 14:59:39.308014    3782 out.go:239] * 
	* 
	W0731 14:59:39.310726    3782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 14:59:39.317881    3782 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-740000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (66.230792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)

TestMultiNode/serial/DeployApp2Nodes (115.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.2085ms)

** stderr ** 
	error: cluster "multinode-740000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- rollout status deployment/busybox: exit status 1 (56.097292ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.268875ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.033084ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.567875ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.7075ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.085ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.505958ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.351958ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.201958ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 15:00:18.291905    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.741458ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.917875ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.86725ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.388ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.193334ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.684583ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.984709ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (28.920792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.85s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-740000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.640625ms)

** stderr ** 
	error: no server found for cluster "multinode-740000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (29.277083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-740000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-740000 -v 3 --alsologtostderr: exit status 83 (43.98175ms)

-- stdout --
	* The control-plane node multinode-740000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-740000"

-- /stdout --
** stderr ** 
	I0731 15:01:35.366090    4204 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:35.366234    4204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.366237    4204 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:35.366239    4204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.366369    4204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:35.366600    4204 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:35.366794    4204 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:35.371657    4204 out.go:177] * The control-plane node multinode-740000 host is not running: state=Stopped
	I0731 15:01:35.376578    4204 out.go:177]   To start a cluster, run: "minikube start -p multinode-740000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-740000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (29.510584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-740000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-740000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.888875ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-740000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-740000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-740000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (29.7055ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
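
Two errors stack here: the kubeconfig has no multinode-740000 context, so kubectl writes nothing to stdout, and the test then hands that empty string to a JSON decoder, which fails with "unexpected end of JSON input". A minimal standalone sketch reproducing the second error in isolation (an illustration, not the test's own code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl produced no stdout because the context was missing;
        // decoding the resulting empty string fails exactly as logged.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }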
TestMultiNode/serial/ProfileList (0.07s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-740000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-740000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-740000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-740000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (28.530209ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
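
The JSON blob above tells the whole story: MultiNodeRequested is true, but Config.Nodes holds a single control-plane entry because the extra nodes were never created. A sketch of the node count the test derives, declaring only the keys visible in the logged output ("valid", "Config", "Nodes"); these are illustrative types, not the test's own:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList declares just the fields needed to count nodes; the
    // remaining keys in the blob above are ignored by encoding/json.
    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode:", err)
            return
        }
        for _, p := range pl.Valid {
            // The run above reports 1 here; the test wanted 3.
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
        }
    }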
TestMultiNode/serial/CopyFile (0.06s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status --output json --alsologtostderr: exit status 7 (28.56225ms)
-- stdout --
	{"Name":"multinode-740000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I0731 15:01:35.571304    4216 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:35.571437    4216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.571444    4216 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:35.571446    4216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.571568    4216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:35.571682    4216 out.go:298] Setting JSON to true
	I0731 15:01:35.571691    4216 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:35.571757    4216 notify.go:220] Checking for updates...
	I0731 15:01:35.571889    4216 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:35.571895    4216 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:35.572091    4216 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:35.572095    4216 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:35.572097    4216 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-740000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (29.46675ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
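
The decode failure is a shape mismatch: with a single (stopped) node, "status --output json" prints one bare object, as the stdout above shows, while the test unmarshals into a slice ([]cmd.Status). One tolerant way a caller could cope, sketched with a Status type mirroring only the fields visible above (not minikube's actual cmd.Status):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors the fields printed in the stdout above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    // decodeStatuses accepts either a JSON array of statuses or, as
    // printed for a single node, one bare object.
    func decodeStatuses(data []byte) ([]Status, error) {
        var many []Status
        if err := json.Unmarshal(data, &many); err == nil {
            return many, nil
        }
        var one Status
        if err := json.Unmarshal(data, &one); err != nil {
            return nil, err
        }
        return []Status{one}, nil
    }

    func main() {
        raw := []byte(`{"Name":"multinode-740000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        sts, err := decodeStatuses(raw)
        fmt.Println(len(sts), err) // 1 <nil>
    }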
TestMultiNode/serial/StopNode (0.13s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 node stop m03: exit status 85 (42.280792ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-740000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status: exit status 7 (29.601625ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr: exit status 7 (29.415083ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:35.702968    4224 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:35.703127    4224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.703131    4224 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:35.703133    4224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.703258    4224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:35.703380    4224 out.go:298] Setting JSON to false
	I0731 15:01:35.703389    4224 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:35.703457    4224 notify.go:220] Checking for updates...
	I0731 15:01:35.703578    4224 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:35.703584    4224 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:35.703793    4224 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:35.703796    4224 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:35.703798    4224 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr": multinode-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (28.909167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
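
Exit status 85 (GUEST_NODE_RETRIEVE) falls out of the earlier failures: m03 was never added, so there is nothing to stop. A guard a harness could use is to enumerate the profile's nodes first via "node list" (the same subcommand this report uses later); the sketch below assumes that command prints one "<name>\t<ip>" line per node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // nodeExists lists the profile's nodes and looks for the requested
    // one before attempting "node stop".
    func nodeExists(profile, node string) bool {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "node", "list", "-p", profile).Output()
        if err != nil {
            return false
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), node) {
                return true
            }
        }
        return false
    }

    func main() {
        // m03's full node name would be multinode-740000-m03.
        fmt.Println(nodeExists("multinode-740000", "multinode-740000-m03"))
    }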
TestMultiNode/serial/StartAfterStop (49.52s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.490709ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0731 15:01:35.761989    4228 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:35.762222    4228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.762225    4228 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:35.762227    4228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.762367    4228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:35.762611    4228 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:35.762798    4228 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:35.767590    4228 out.go:177] 
	W0731 15:01:35.770572    4228 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 15:01:35.770576    4228 out.go:239] * 
	* 
	W0731 15:01:35.772193    4228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:01:35.775531    4228 out.go:177] 
** /stderr **
multinode_test.go:284: I0731 15:01:35.761989    4228 out.go:291] Setting OutFile to fd 1 ...
I0731 15:01:35.762222    4228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 15:01:35.762225    4228 out.go:304] Setting ErrFile to fd 2...
I0731 15:01:35.762227    4228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 15:01:35.762367    4228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 15:01:35.762611    4228 mustload.go:65] Loading cluster: multinode-740000
I0731 15:01:35.762798    4228 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 15:01:35.767590    4228 out.go:177] 
W0731 15:01:35.770572    4228 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 15:01:35.770576    4228 out.go:239] * 
* 
W0731 15:01:35.772193    4228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 15:01:35.775531    4228 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-740000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (29.498917ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:35.808318    4230 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:35.808458    4230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.808461    4230 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:35.808463    4230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:35.808587    4230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:35.808712    4230 out.go:298] Setting JSON to false
	I0731 15:01:35.808725    4230 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:35.808781    4230 notify.go:220] Checking for updates...
	I0731 15:01:35.808941    4230 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:35.808947    4230 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:35.809161    4230 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:35.809165    4230 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:35.809167    4230 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (72.550209ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:36.705167    4232 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:36.705395    4232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:36.705399    4232 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:36.705403    4232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:36.705580    4232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:36.705720    4232 out.go:298] Setting JSON to false
	I0731 15:01:36.705731    4232 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:36.705767    4232 notify.go:220] Checking for updates...
	I0731 15:01:36.705993    4232 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:36.706000    4232 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:36.706274    4232 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:36.706278    4232 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:36.706281    4232 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (72.618708ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:38.114700    4234 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:38.114900    4234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:38.114904    4234 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:38.114907    4234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:38.115063    4234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:38.115227    4234 out.go:298] Setting JSON to false
	I0731 15:01:38.115239    4234 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:38.115277    4234 notify.go:220] Checking for updates...
	I0731 15:01:38.115517    4234 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:38.115524    4234 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:38.115797    4234 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:38.115802    4234 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:38.115805    4234 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (71.749125ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:40.523064    4236 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:40.523282    4236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:40.523287    4236 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:40.523291    4236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:40.523470    4236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:40.523635    4236 out.go:298] Setting JSON to false
	I0731 15:01:40.523648    4236 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:40.523689    4236 notify.go:220] Checking for updates...
	I0731 15:01:40.523932    4236 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:40.523940    4236 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:40.524241    4236 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:40.524247    4236 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:40.524250    4236 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (71.116625ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:43.934308    4238 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:43.934510    4238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:43.934515    4238 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:43.934518    4238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:43.934692    4238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:43.934850    4238 out.go:298] Setting JSON to false
	I0731 15:01:43.934862    4238 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:43.934906    4238 notify.go:220] Checking for updates...
	I0731 15:01:43.935118    4238 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:43.935126    4238 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:43.935406    4238 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:43.935411    4238 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:43.935414    4238 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (73.431125ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:47.476977    4243 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:47.477168    4243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:47.477173    4243 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:47.477176    4243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:47.477325    4243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:47.477494    4243 out.go:298] Setting JSON to false
	I0731 15:01:47.477513    4243 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:47.477558    4243 notify.go:220] Checking for updates...
	I0731 15:01:47.477797    4243 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:47.477805    4243 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:47.478059    4243 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:47.478064    4243 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:47.478066    4243 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (72.146125ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:01:54.396103    4247 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:01:54.396318    4247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:54.396323    4247 out.go:304] Setting ErrFile to fd 2...
	I0731 15:01:54.396327    4247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:01:54.396528    4247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:01:54.396689    4247 out.go:298] Setting JSON to false
	I0731 15:01:54.396701    4247 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:01:54.396728    4247 notify.go:220] Checking for updates...
	I0731 15:01:54.396954    4247 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:01:54.396961    4247 status.go:255] checking status of multinode-740000 ...
	I0731 15:01:54.397212    4247 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:01:54.397217    4247 status.go:343] host is not running, skipping remaining checks
	I0731 15:01:54.397220    4247 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (71.542958ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:02:07.774266    4251 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:02:07.774539    4251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:07.774544    4251 out.go:304] Setting ErrFile to fd 2...
	I0731 15:02:07.774548    4251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:07.774743    4251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:02:07.774905    4251 out.go:298] Setting JSON to false
	I0731 15:02:07.774929    4251 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:02:07.774972    4251 notify.go:220] Checking for updates...
	I0731 15:02:07.775185    4251 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:02:07.775192    4251 status.go:255] checking status of multinode-740000 ...
	I0731 15:02:07.775466    4251 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:02:07.775471    4251 status.go:343] host is not running, skipping remaining checks
	I0731 15:02:07.775474    4251 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr: exit status 7 (72.069458ms)
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0731 15:02:25.216842    4256 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:02:25.217051    4256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:25.217056    4256 out.go:304] Setting ErrFile to fd 2...
	I0731 15:02:25.217059    4256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:25.217280    4256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:02:25.217436    4256 out.go:298] Setting JSON to false
	I0731 15:02:25.217448    4256 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:02:25.217499    4256 notify.go:220] Checking for updates...
	I0731 15:02:25.217734    4256 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:02:25.217742    4256 status.go:255] checking status of multinode-740000 ...
	I0731 15:02:25.218019    4256 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:02:25.218024    4256 status.go:343] host is not running, skipping remaining checks
	I0731 15:02:25.218027    4256 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-740000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (32.21375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.52s)
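
The 49 seconds this test burns are visible in the timestamps: identical status dumps at 15:01:35, :36, :38, :40, :43, :47, :54, then 15:02:07 and 15:02:25, i.e. a poll loop whose delay grows between attempts until a deadline expires. A sketch of that shape (an illustration of the backoff pattern, not the test's actual helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls "status" with a doubling delay, roughly
    // mirroring the widening gaps between the dumps logged above.
    func waitRunning(profile string, deadline time.Duration) bool {
        delay := time.Second
        for start := time.Now(); time.Since(start) < deadline; delay *= 2 {
            out, _ := exec.Command("out/minikube-darwin-arm64",
                "status", "--format={{.Host}}", "-p", profile).Output()
            if strings.TrimSpace(string(out)) == "Running" {
                return true
            }
            time.Sleep(delay)
        }
        return false
    }

    func main() {
        // Never succeeds in the run above: the host stays Stopped.
        fmt.Println(waitRunning("multinode-740000", 45*time.Second))
    }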
TestMultiNode/serial/RestartKeepsNodes (8.8s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-740000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-740000
E0731 15:02:25.960035    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-740000: (3.453040667s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-740000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-740000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.220807208s)
-- stdout --
	* [multinode-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-740000" primary control-plane node in "multinode-740000" cluster
	* Restarting existing qemu2 VM for "multinode-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 15:02:28.793751    4280 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:02:28.793983    4280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:28.793987    4280 out.go:304] Setting ErrFile to fd 2...
	I0731 15:02:28.793990    4280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:28.794177    4280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:02:28.795557    4280 out.go:298] Setting JSON to false
	I0731 15:02:28.814857    4280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3712,"bootTime":1722459636,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:02:28.814924    4280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:02:28.819568    4280 out.go:177] * [multinode-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:02:28.826523    4280 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:02:28.826567    4280 notify.go:220] Checking for updates...
	I0731 15:02:28.833460    4280 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:02:28.836475    4280 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:02:28.839529    4280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:02:28.842537    4280 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:02:28.845487    4280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:02:28.848771    4280 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:02:28.848827    4280 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:02:28.853425    4280 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:02:28.860644    4280 start.go:297] selected driver: qemu2
	I0731 15:02:28.860653    4280 start.go:901] validating driver "qemu2" against &{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:02:28.860726    4280 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:02:28.863307    4280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:02:28.863358    4280 cni.go:84] Creating CNI manager for ""
	I0731 15:02:28.863363    4280 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 15:02:28.863420    4280 start.go:340] cluster config:
	{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:02:28.867391    4280 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:02:28.874555    4280 out.go:177] * Starting "multinode-740000" primary control-plane node in "multinode-740000" cluster
	I0731 15:02:28.878440    4280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:02:28.878454    4280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:02:28.878464    4280 cache.go:56] Caching tarball of preloaded images
	I0731 15:02:28.878517    4280 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:02:28.878523    4280 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:02:28.878587    4280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/multinode-740000/config.json ...
	I0731 15:02:28.879010    4280 start.go:360] acquireMachinesLock for multinode-740000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:02:28.879047    4280 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "multinode-740000"
	I0731 15:02:28.879058    4280 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:02:28.879064    4280 fix.go:54] fixHost starting: 
	I0731 15:02:28.879196    4280 fix.go:112] recreateIfNeeded on multinode-740000: state=Stopped err=<nil>
	W0731 15:02:28.879205    4280 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:02:28.886444    4280 out.go:177] * Restarting existing qemu2 VM for "multinode-740000" ...
	I0731 15:02:28.890462    4280 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:02:28.890512    4280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:65:61:10:2d:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 15:02:28.892915    4280 main.go:141] libmachine: STDOUT: 
	I0731 15:02:28.892937    4280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:02:28.892966    4280 fix.go:56] duration metric: took 13.903125ms for fixHost
	I0731 15:02:28.892971    4280 start.go:83] releasing machines lock for "multinode-740000", held for 13.918625ms
	W0731 15:02:28.892980    4280 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:02:28.893015    4280 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:02:28.893021    4280 start.go:729] Will try again in 5 seconds ...
	I0731 15:02:33.895193    4280 start.go:360] acquireMachinesLock for multinode-740000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:02:33.895654    4280 start.go:364] duration metric: took 323.167µs to acquireMachinesLock for "multinode-740000"
	I0731 15:02:33.895781    4280 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:02:33.895805    4280 fix.go:54] fixHost starting: 
	I0731 15:02:33.896576    4280 fix.go:112] recreateIfNeeded on multinode-740000: state=Stopped err=<nil>
	W0731 15:02:33.896604    4280 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:02:33.901174    4280 out.go:177] * Restarting existing qemu2 VM for "multinode-740000" ...
	I0731 15:02:33.907948    4280 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:02:33.908231    4280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:65:61:10:2d:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 15:02:33.918031    4280 main.go:141] libmachine: STDOUT: 
	I0731 15:02:33.918196    4280 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:02:33.918285    4280 fix.go:56] duration metric: took 22.480709ms for fixHost
	I0731 15:02:33.918300    4280 start.go:83] releasing machines lock for "multinode-740000", held for 22.619958ms
	W0731 15:02:33.918486    4280 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:02:33.927110    4280 out.go:177] 
	W0731 15:02:33.931139    4280 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:02:33.931168    4280 out.go:239] * 
	* 
	W0731 15:02:33.933861    4280 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:02:33.942094    4280 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-740000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-740000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (31.990125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.80s)
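Every failure in this serial group reduces to the one error repeated in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). The Go probe below is a minimal diagnostic sketch, not part of the test suite; it assumes it runs on the CI host with permission to open the socket.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client is handed in the
		// qemu-system-aarch64 command lines logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This mirrors the failure mode in the report: the daemon is not
			// running, or the socket path is stale.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When the probe fails this way, no VM can be provisioned at all, so the later failures in this group are downstream of the same host-side daemon problem.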

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 node delete m03: exit status 83 (39.317333ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-740000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-740000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-740000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr: exit status 7 (29.071667ms)

                                                
                                                
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:02:34.122244    4296 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:02:34.122402    4296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:34.122405    4296 out.go:304] Setting ErrFile to fd 2...
	I0731 15:02:34.122407    4296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:34.122539    4296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:02:34.122665    4296 out.go:298] Setting JSON to false
	I0731 15:02:34.122674    4296 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:02:34.122736    4296 notify.go:220] Checking for updates...
	I0731 15:02:34.122861    4296 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:02:34.122867    4296 status.go:255] checking status of multinode-740000 ...
	I0731 15:02:34.123073    4296 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:02:34.123077    4296 status.go:343] host is not running, skipping remaining checks
	I0731 15:02:34.123079    4296 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (28.545166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-740000 stop: (3.310484834s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status: exit status 7 (62.042166ms)

                                                
                                                
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr: exit status 7 (32.651416ms)

                                                
                                                
-- stdout --
	multinode-740000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:02:37.556575    4324 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:02:37.556698    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:37.556701    4324 out.go:304] Setting ErrFile to fd 2...
	I0731 15:02:37.556704    4324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:37.556836    4324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:02:37.556948    4324 out.go:298] Setting JSON to false
	I0731 15:02:37.556958    4324 mustload.go:65] Loading cluster: multinode-740000
	I0731 15:02:37.557000    4324 notify.go:220] Checking for updates...
	I0731 15:02:37.557171    4324 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:02:37.557177    4324 status.go:255] checking status of multinode-740000 ...
	I0731 15:02:37.557390    4324 status.go:330] multinode-740000 host status = "Stopped" (err=<nil>)
	I0731 15:02:37.557394    4324 status.go:343] host is not running, skipping remaining checks
	I0731 15:02:37.557396    4324 status.go:257] multinode-740000 status: &{Name:multinode-740000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr": multinode-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-740000 status --alsologtostderr": multinode-740000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (29.371083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.44s)
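The stop itself completes (3.31s), but the assertions at multinode_test.go:364 and multinode_test.go:368 still fail: the status output above lists a single control-plane node, while a two-node cluster should report two stopped hosts and two stopped kubelets (the second node was never recreated after the failed restarts earlier in the group). The sketch below illustrates the kind of count the test appears to perform; the names and the expected count are illustrative assumptions, not the test's actual code.

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output as it appears in the report: only one node remains.
		statusOut := "multinode-740000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"

		gotHosts := strings.Count(statusOut, "host: Stopped")
		gotKubelets := strings.Count(statusOut, "kubelet: Stopped")
		want := 2 // assumed expectation for a two-node cluster

		if gotHosts != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", gotHosts, want)
		}
		if gotKubelets != want {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", gotKubelets, want)
		}
	}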

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-740000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-740000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181326125s)

                                                
                                                
-- stdout --
	* [multinode-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-740000" primary control-plane node in "multinode-740000" cluster
	* Restarting existing qemu2 VM for "multinode-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-740000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:02:37.614712    4328 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:02:37.614843    4328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:37.614846    4328 out.go:304] Setting ErrFile to fd 2...
	I0731 15:02:37.614849    4328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:02:37.614988    4328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:02:37.615967    4328 out.go:298] Setting JSON to false
	I0731 15:02:37.631902    4328 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3721,"bootTime":1722459636,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:02:37.631972    4328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:02:37.637282    4328 out.go:177] * [multinode-740000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:02:37.643211    4328 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:02:37.643248    4328 notify.go:220] Checking for updates...
	I0731 15:02:37.651214    4328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:02:37.655184    4328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:02:37.658211    4328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:02:37.661207    4328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:02:37.664193    4328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:02:37.667412    4328 config.go:182] Loaded profile config "multinode-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:02:37.667678    4328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:02:37.672199    4328 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:02:37.679161    4328 start.go:297] selected driver: qemu2
	I0731 15:02:37.679166    4328 start.go:901] validating driver "qemu2" against &{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:02:37.679218    4328 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:02:37.681559    4328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:02:37.681585    4328 cni.go:84] Creating CNI manager for ""
	I0731 15:02:37.681590    4328 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 15:02:37.681630    4328 start.go:340] cluster config:
	{Name:multinode-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-740000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:02:37.685076    4328 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:02:37.691292    4328 out.go:177] * Starting "multinode-740000" primary control-plane node in "multinode-740000" cluster
	I0731 15:02:37.695202    4328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:02:37.695220    4328 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:02:37.695234    4328 cache.go:56] Caching tarball of preloaded images
	I0731 15:02:37.695296    4328 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:02:37.695304    4328 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:02:37.695367    4328 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/multinode-740000/config.json ...
	I0731 15:02:37.695789    4328 start.go:360] acquireMachinesLock for multinode-740000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:02:37.695826    4328 start.go:364] duration metric: took 31.291µs to acquireMachinesLock for "multinode-740000"
	I0731 15:02:37.695836    4328 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:02:37.695840    4328 fix.go:54] fixHost starting: 
	I0731 15:02:37.695956    4328 fix.go:112] recreateIfNeeded on multinode-740000: state=Stopped err=<nil>
	W0731 15:02:37.695965    4328 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:02:37.700175    4328 out.go:177] * Restarting existing qemu2 VM for "multinode-740000" ...
	I0731 15:02:37.708050    4328 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:02:37.708087    4328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:65:61:10:2d:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 15:02:37.710271    4328 main.go:141] libmachine: STDOUT: 
	I0731 15:02:37.710290    4328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:02:37.710322    4328 fix.go:56] duration metric: took 14.480916ms for fixHost
	I0731 15:02:37.710326    4328 start.go:83] releasing machines lock for "multinode-740000", held for 14.495834ms
	W0731 15:02:37.710334    4328 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:02:37.710371    4328 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:02:37.710377    4328 start.go:729] Will try again in 5 seconds ...
	I0731 15:02:42.712446    4328 start.go:360] acquireMachinesLock for multinode-740000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:02:42.712889    4328 start.go:364] duration metric: took 316.292µs to acquireMachinesLock for "multinode-740000"
	I0731 15:02:42.713029    4328 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:02:42.713052    4328 fix.go:54] fixHost starting: 
	I0731 15:02:42.713753    4328 fix.go:112] recreateIfNeeded on multinode-740000: state=Stopped err=<nil>
	W0731 15:02:42.713779    4328 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:02:42.721954    4328 out.go:177] * Restarting existing qemu2 VM for "multinode-740000" ...
	I0731 15:02:42.725119    4328 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:02:42.725399    4328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:65:61:10:2d:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/multinode-740000/disk.qcow2
	I0731 15:02:42.734646    4328 main.go:141] libmachine: STDOUT: 
	I0731 15:02:42.734742    4328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:02:42.734850    4328 fix.go:56] duration metric: took 21.801542ms for fixHost
	I0731 15:02:42.734871    4328 start.go:83] releasing machines lock for "multinode-740000", held for 21.961458ms
	W0731 15:02:42.735078    4328 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-740000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:02:42.741103    4328 out.go:177] 
	W0731 15:02:42.745132    4328 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:02:42.745166    4328 out.go:239] * 
	* 
	W0731 15:02:42.747581    4328 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:02:42.756125    4328 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-740000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (67.101334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
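The timestamps in the stderr above (15:02:37, then 15:02:42) show the retry behavior of minikube's start path: one failed fixHost attempt, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then exit with GUEST_PROVISION and status 80. A compressed sketch of that control flow follows; startHost is an illustrative stand-in, not minikube's actual function.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start; here it always fails the way
	// the logs above do.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // reported as exit status 80
			}
		}
	}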

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-740000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-740000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-740000-m01 --driver=qemu2 : exit status 80 (9.838716875s)

                                                
                                                
-- stdout --
	* [multinode-740000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-740000-m01" primary control-plane node in "multinode-740000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-740000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-740000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-740000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-740000-m02 --driver=qemu2 : exit status 80 (9.979425375s)

                                                
                                                
-- stdout --
	* [multinode-740000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-740000-m02" primary control-plane node in "multinode-740000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-740000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-740000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-740000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-740000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-740000: exit status 83 (82.007ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-740000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-740000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-740000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-740000 -n multinode-740000: exit status 7 (29.963917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-740000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.04s)

                                                
                                    
TestPreload (9.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.814581125s)

                                                
                                                
-- stdout --
	* [test-preload-612000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-612000" primary control-plane node in "test-preload-612000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-612000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:03:03.011076    4393 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:03:03.011221    4393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:03.011225    4393 out.go:304] Setting ErrFile to fd 2...
	I0731 15:03:03.011227    4393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:03:03.011363    4393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:03:03.012409    4393 out.go:298] Setting JSON to false
	I0731 15:03:03.028487    4393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3747,"bootTime":1722459636,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:03:03.028553    4393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:03:03.033655    4393 out.go:177] * [test-preload-612000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:03:03.041686    4393 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:03:03.041746    4393 notify.go:220] Checking for updates...
	I0731 15:03:03.049580    4393 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:03:03.052645    4393 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:03:03.055691    4393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:03:03.058610    4393 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:03:03.061631    4393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:03:03.064905    4393 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:03:03.064954    4393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:03:03.068571    4393 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:03:03.074569    4393 start.go:297] selected driver: qemu2
	I0731 15:03:03.074575    4393 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:03:03.074581    4393 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:03:03.076957    4393 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:03:03.079612    4393 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:03:03.082688    4393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:03:03.082724    4393 cni.go:84] Creating CNI manager for ""
	I0731 15:03:03.082731    4393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:03:03.082740    4393 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:03:03.082774    4393 start.go:340] cluster config:
	{Name:test-preload-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:03:03.086546    4393 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.093587    4393 out.go:177] * Starting "test-preload-612000" primary control-plane node in "test-preload-612000" cluster
	I0731 15:03:03.097650    4393 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0731 15:03:03.097751    4393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/test-preload-612000/config.json ...
	I0731 15:03:03.097776    4393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/test-preload-612000/config.json: {Name:mkb8bdcf798799e25ac1297e0a3911bc7f90908c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:03:03.097781    4393 cache.go:107] acquiring lock: {Name:mkd1a0036729f2aecb30e56732968eecdf60281e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.097803    4393 cache.go:107] acquiring lock: {Name:mk4289563d8c8630d7627ee9a56dfe2fad3c5cc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.097813    4393 cache.go:107] acquiring lock: {Name:mk6ff82a358126caf3563a88b14c1a33877b9425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.097947    4393 cache.go:107] acquiring lock: {Name:mk48bc10236c6c807d7daa06e2764fddc44529d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.098019    4393 start.go:360] acquireMachinesLock for test-preload-612000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:03.097781    4393 cache.go:107] acquiring lock: {Name:mk7b3a1051acec00982a005dce2e525d14b599d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.098041    4393 cache.go:107] acquiring lock: {Name:mk1f1ae90d47285d33c426adeb9145093f1dd499 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.098055    4393 start.go:364] duration metric: took 26.334µs to acquireMachinesLock for "test-preload-612000"
	I0731 15:03:03.098067    4393 cache.go:107] acquiring lock: {Name:mk51cbdf51a844fe8a5507965a8719d269ce1b4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.098060    4393 cache.go:107] acquiring lock: {Name:mkdf9099f45d0bfacc76bfcf45a69cd9b9ea26a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:03:03.098067    4393 start.go:93] Provisioning new machine with config: &{Name:test-preload-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:03.098166    4393 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 15:03:03.098168    4393 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 15:03:03.098176    4393 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:03.098179    4393 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0731 15:03:03.098167    4393 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 15:03:03.098370    4393 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:03:03.098171    4393 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:03:03.098257    4393 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 15:03:03.098695    4393 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:03:03.106558    4393 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:03:03.110120    4393 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 15:03:03.110989    4393 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 15:03:03.111093    4393 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 15:03:03.111054    4393 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:03:03.113003    4393 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:03:03.113065    4393 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 15:03:03.113085    4393 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 15:03:03.113314    4393 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:03:03.124246    4393 start.go:159] libmachine.API.Create for "test-preload-612000" (driver="qemu2")
	I0731 15:03:03.124267    4393 client.go:168] LocalClient.Create starting
	I0731 15:03:03.124356    4393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:03.124386    4393 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:03.124395    4393 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:03.124431    4393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:03.124453    4393 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:03.124461    4393 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:03.124767    4393 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:03.277725    4393 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:03.369516    4393 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:03.369536    4393 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:03.369697    4393 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2
	I0731 15:03:03.379646    4393 main.go:141] libmachine: STDOUT: 
	I0731 15:03:03.379668    4393 main.go:141] libmachine: STDERR: 
	I0731 15:03:03.379724    4393 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2 +20000M
	I0731 15:03:03.388276    4393 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:03.388295    4393 main.go:141] libmachine: STDERR: 
	I0731 15:03:03.388316    4393 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2
	I0731 15:03:03.388319    4393 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:03.388332    4393 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:03.388373    4393 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:a9:24:4c:dd:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2
	I0731 15:03:03.390326    4393 main.go:141] libmachine: STDOUT: 
	I0731 15:03:03.390344    4393 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:03.390359    4393 client.go:171] duration metric: took 266.092916ms to LocalClient.Create
	I0731 15:03:03.512735    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 15:03:03.548391    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 15:03:03.569685    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 15:03:03.591743    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0731 15:03:03.609996    4393 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 15:03:03.610016    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 15:03:03.660331    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 15:03:03.663681    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 15:03:03.696175    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0731 15:03:03.696194    4393 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 598.400042ms
	I0731 15:03:03.696213    4393 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0731 15:03:04.101613    4393 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 15:03:04.101713    4393 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 15:03:04.310879    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 15:03:04.310922    4393 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.213159667s
	I0731 15:03:04.310946    4393 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 15:03:05.102812    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0731 15:03:05.102882    4393 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.004866583s
	I0731 15:03:05.102919    4393 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0731 15:03:05.390561    4393 start.go:128] duration metric: took 2.292370583s to createHost
	I0731 15:03:05.390615    4393 start.go:83] releasing machines lock for "test-preload-612000", held for 2.292590458s
	W0731 15:03:05.390682    4393 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:05.401388    4393 out.go:177] * Deleting "test-preload-612000" in qemu2 ...
	W0731 15:03:05.432365    4393 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:05.432401    4393 start.go:729] Will try again in 5 seconds ...
	I0731 15:03:06.377535    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0731 15:03:06.377578    4393 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.279845875s
	I0731 15:03:06.377604    4393 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0731 15:03:07.493180    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0731 15:03:07.493242    4393 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.395544708s
	I0731 15:03:07.493270    4393 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0731 15:03:07.827978    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0731 15:03:07.828041    4393 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.730180208s
	I0731 15:03:07.828063    4393 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0731 15:03:09.553459    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0731 15:03:09.553512    4393 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.455601666s
	I0731 15:03:09.553595    4393 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0731 15:03:10.432664    4393 start.go:360] acquireMachinesLock for test-preload-612000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:03:10.433116    4393 start.go:364] duration metric: took 351.167µs to acquireMachinesLock for "test-preload-612000"
	I0731 15:03:10.433235    4393 start.go:93] Provisioning new machine with config: &{Name:test-preload-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:03:10.433634    4393 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:03:10.443188    4393 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:03:10.494414    4393 start.go:159] libmachine.API.Create for "test-preload-612000" (driver="qemu2")
	I0731 15:03:10.494452    4393 client.go:168] LocalClient.Create starting
	I0731 15:03:10.494587    4393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:03:10.494654    4393 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:10.494674    4393 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:10.494741    4393 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:03:10.494789    4393 main.go:141] libmachine: Decoding PEM data...
	I0731 15:03:10.494804    4393 main.go:141] libmachine: Parsing certificate...
	I0731 15:03:10.495319    4393 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:03:10.560137    4393 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0731 15:03:10.560156    4393 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.46222575s
	I0731 15:03:10.560163    4393 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0731 15:03:10.560176    4393 cache.go:87] Successfully saved all images to host disk.
	I0731 15:03:10.656174    4393 main.go:141] libmachine: Creating SSH key...
	I0731 15:03:10.724050    4393 main.go:141] libmachine: Creating Disk image...
	I0731 15:03:10.724058    4393 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:03:10.724254    4393 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2
	I0731 15:03:10.733697    4393 main.go:141] libmachine: STDOUT: 
	I0731 15:03:10.733716    4393 main.go:141] libmachine: STDERR: 
	I0731 15:03:10.733773    4393 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2 +20000M
	I0731 15:03:10.741802    4393 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:03:10.741815    4393 main.go:141] libmachine: STDERR: 
	I0731 15:03:10.741827    4393 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2
	I0731 15:03:10.741831    4393 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:03:10.741843    4393 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:03:10.741883    4393 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:25:08:05:ed:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/test-preload-612000/disk.qcow2
	I0731 15:03:10.743592    4393 main.go:141] libmachine: STDOUT: 
	I0731 15:03:10.743608    4393 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:03:10.743622    4393 client.go:171] duration metric: took 249.170166ms to LocalClient.Create
	I0731 15:03:12.745892    4393 start.go:128] duration metric: took 2.312246625s to createHost
	I0731 15:03:12.745989    4393 start.go:83] releasing machines lock for "test-preload-612000", held for 2.312889583s
	W0731 15:03:12.746293    4393 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:03:12.760978    4393 out.go:177] 
	W0731 15:03:12.765032    4393 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:03:12.765063    4393 out.go:239] * 
	* 
	W0731 15:03:12.767747    4393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:03:12.781928    4393 out.go:177] 

** /stderr **
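
Note: the stderr above shows the guest disk being built in two qemu-img steps (a raw-to-qcow2 convert, then a +20000M resize), and both steps succeed; the run only fails afterwards, when socket_vmnet_client cannot reach its socket. The Go sketch below mirrors those two invocations under stated assumptions: qemu-img must be on PATH, and the file names are illustrative placeholders rather than the Jenkins paths from the log.

// mkdisk_sketch.go: a sketch of the two qemu-img invocations shown in the
// stderr above (convert raw to qcow2, then grow by +20000M). Requires qemu-img
// on PATH; the paths here are placeholders, not the paths from the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and echoes its combined output, roughly the way the
// log's "executing: ..." lines do.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\n%s", name, args, out)
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder file names

	// Step 1: convert the raw seed image into qcow2 format.
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)

	// Step 2: grow the qcow2 image by 20000 MB, matching the log's "+20000M".
	run("qemu-img", "resize", qcow2, "+20000M")
}
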
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-612000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-31 15:03:12.80022 -0700 PDT m=+2224.244313209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-612000 -n test-preload-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-612000 -n test-preload-612000: exit status 7 (65.628875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-612000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-612000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-612000
--- FAIL: TestPreload (9.96s)
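
Note: every qemu2 start in this section fails at the same point: socket_vmnet_client receives "Connection refused" when dialing /var/run/socket_vmnet, so the VM is never launched. Below is a minimal, self-contained Go probe for that precondition; it assumes only that the socket path matches the one in the logs, so adjust it for a non-default install.

// probe_socket_vmnet.go: a minimal sketch (not part of the test suite) that
// reproduces the check socket_vmnet_client effectively performs, dialing the
// unix socket that the socket_vmnet daemon should be listening on.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path used by the failing runs above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the same condition the tests hit: nothing is accepting
		// connections on the socket (daemon not running, or not readable).
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refusal on this dial points at the host-side socket_vmnet daemon rather than at minikube or QEMU, which matches the identical error string across the qemu2 failures in this report.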

TestScheduledStopUnix (10.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-665000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-665000 --memory=2048 --driver=qemu2 : exit status 80 (9.871757875s)

-- stdout --
	* [scheduled-stop-665000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-665000" primary control-plane node in "scheduled-stop-665000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-665000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-665000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-665000" primary control-plane node in "scheduled-stop-665000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-665000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-31 15:03:22.815054 -0700 PDT m=+2234.259331126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-665000 -n scheduled-stop-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-665000 -n scheduled-stop-665000: exit status 7 (68.176333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-665000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-665000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-665000
--- FAIL: TestScheduledStopUnix (10.02s)
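
Note: the stdout above records minikube's recovery path: a failed create, "* Deleting ... in qemu2 ...", a second attempt after the logged 5-second wait, and finally the GUEST_PROVISION exit. The sketch below is an illustrative reconstruction of that create/delete/retry flow, not minikube's actual source; the stubbed createHost simply returns the error seen in these logs.

// retry_sketch.go: an illustrative reconstruction of the two-attempt start
// behavior visible in the stdout above. All names here are hypothetical.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real libmachine create step; it always fails
// the way the logs do, so the control flow below can be followed end to end.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteProfile(profile string) { fmt.Printf("* Deleting %q in qemu2 ...\n", profile) }

func startWithRetry(profile string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = createHost(profile); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteProfile(profile)
		if i < attempts-1 {
			time.Sleep(wait) // the logs show a 5-second pause between attempts
		}
	}
	return fmt.Errorf("exiting due to GUEST_PROVISION: %w", err)
}

func main() {
	if err := startWithRetry("scheduled-stop-665000", 2, 5*time.Second); err != nil {
		fmt.Println("X", err)
	}
}
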

TestSkaffold (12.31s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1646413367 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1646413367 version: (1.069790541s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-182000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-182000 --memory=2600 --driver=qemu2 : exit status 80 (9.896123458s)

-- stdout --
	* [skaffold-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-182000" primary control-plane node in "skaffold-182000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-182000" primary control-plane node in "skaffold-182000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-31 15:03:35.12572 -0700 PDT m=+2246.570222584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-182000 -n skaffold-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-182000 -n skaffold-182000: exit status 7 (62.331458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-182000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-182000
--- FAIL: TestSkaffold (12.31s)
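
Note: the post-mortem helpers probe the profile with "status --format={{.Host}}" and treat exit status 7 as "may be ok", since it means the host exists but is Stopped. The Go sketch below shows that exit-code handling with os/exec; it assumes a minikube binary on PATH and borrows the profile name from the failed test above.

// status_sketch.go: a minimal sketch of the post-mortem check the helpers run.
// Assumes "minikube" is on PATH; the profile name comes from the test above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "skaffold-182000")
	out, err := cmd.Output() // Output still returns captured stdout on a nonzero exit

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 mirrors the "Stopped (may be ok)" result in the post-mortem.
		fmt.Printf("host is not running (exit 7, may be ok): %s\n", out)
	default:
		fmt.Printf("status failed: %v\n", err)
	}
}
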

TestRunningBinaryUpgrade (600.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.785067210 start -p running-upgrade-683000 --memory=2200 --vm-driver=qemu2 
E0731 15:05:18.286024    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.785067210 start -p running-upgrade-683000 --memory=2200 --vm-driver=qemu2 : (1m5.935353208s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-683000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-683000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m20.242959667s)

-- stdout --
	* [running-upgrade-683000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-683000" primary control-plane node in "running-upgrade-683000" cluster
	* Updating the running qemu2 "running-upgrade-683000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 15:05:23.972835    4804 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:05:23.972974    4804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:05:23.972978    4804 out.go:304] Setting ErrFile to fd 2...
	I0731 15:05:23.972980    4804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:05:23.973117    4804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:05:23.974187    4804 out.go:298] Setting JSON to false
	I0731 15:05:23.990657    4804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3887,"bootTime":1722459636,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:05:23.990730    4804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:05:23.994996    4804 out.go:177] * [running-upgrade-683000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:05:24.001963    4804 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:05:24.001995    4804 notify.go:220] Checking for updates...
	I0731 15:05:24.008965    4804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:05:24.011945    4804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:05:24.014927    4804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:05:24.017960    4804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:05:24.020990    4804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:05:24.024141    4804 config.go:182] Loaded profile config "running-upgrade-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:05:24.026881    4804 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 15:05:24.029934    4804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:05:24.032840    4804 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:05:24.039955    4804 start.go:297] selected driver: qemu2
	I0731 15:05:24.039962    4804 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-683000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-683000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:05:24.040034    4804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:05:24.042213    4804 cni.go:84] Creating CNI manager for ""
	I0731 15:05:24.042230    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:05:24.042255    4804 start.go:340] cluster config:
	{Name:running-upgrade-683000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-683000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:05:24.042298    4804 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:05:24.049911    4804 out.go:177] * Starting "running-upgrade-683000" primary control-plane node in "running-upgrade-683000" cluster
	I0731 15:05:24.053968    4804 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 15:05:24.053984    4804 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 15:05:24.053994    4804 cache.go:56] Caching tarball of preloaded images
	I0731 15:05:24.054063    4804 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:05:24.054076    4804 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 15:05:24.054128    4804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/config.json ...
	I0731 15:05:24.054462    4804 start.go:360] acquireMachinesLock for running-upgrade-683000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:05:24.054496    4804 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "running-upgrade-683000"
	I0731 15:05:24.054504    4804 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:05:24.054510    4804 fix.go:54] fixHost starting: 
	I0731 15:05:24.055178    4804 fix.go:112] recreateIfNeeded on running-upgrade-683000: state=Running err=<nil>
	W0731 15:05:24.055186    4804 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:05:24.057927    4804 out.go:177] * Updating the running qemu2 "running-upgrade-683000" VM ...
	I0731 15:05:24.065917    4804 machine.go:94] provisionDockerMachine start ...
	I0731 15:05:24.065951    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.066052    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.066056    4804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 15:05:24.133608    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-683000
	
	I0731 15:05:24.133620    4804 buildroot.go:166] provisioning hostname "running-upgrade-683000"
	I0731 15:05:24.133658    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.133765    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.133770    4804 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-683000 && echo "running-upgrade-683000" | sudo tee /etc/hostname
	I0731 15:05:24.203346    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-683000
	
	I0731 15:05:24.203405    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.203523    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.203530    4804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-683000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-683000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-683000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 15:05:24.269656    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 15:05:24.269668    4804 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1411/.minikube}
	I0731 15:05:24.269676    4804 buildroot.go:174] setting up certificates
	I0731 15:05:24.269681    4804 provision.go:84] configureAuth start
	I0731 15:05:24.269685    4804 provision.go:143] copyHostCerts
	I0731 15:05:24.269740    4804 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem, removing ...
	I0731 15:05:24.269748    4804 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem
	I0731 15:05:24.269878    4804 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem (1078 bytes)
	I0731 15:05:24.270065    4804 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem, removing ...
	I0731 15:05:24.270069    4804 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem
	I0731 15:05:24.270124    4804 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem (1123 bytes)
	I0731 15:05:24.270251    4804 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem, removing ...
	I0731 15:05:24.270254    4804 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem
	I0731 15:05:24.270306    4804 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem (1679 bytes)
	I0731 15:05:24.270408    4804 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-683000 san=[127.0.0.1 localhost minikube running-upgrade-683000]
	I0731 15:05:24.356457    4804 provision.go:177] copyRemoteCerts
	I0731 15:05:24.356483    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 15:05:24.356489    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:05:24.391405    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 15:05:24.398213    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 15:05:24.404731    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 15:05:24.411851    4804 provision.go:87] duration metric: took 142.169042ms to configureAuth
	I0731 15:05:24.411863    4804 buildroot.go:189] setting minikube options for container-runtime
	I0731 15:05:24.411970    4804 config.go:182] Loaded profile config "running-upgrade-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:05:24.412003    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.412099    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.412104    4804 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 15:05:24.478050    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 15:05:24.478057    4804 buildroot.go:70] root file system type: tmpfs
	I0731 15:05:24.478112    4804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 15:05:24.478167    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.478275    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.478312    4804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 15:05:24.548380    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 15:05:24.548422    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.548531    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.548540    4804 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 15:05:24.613200    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
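The paired ExecStart= lines above use systemd's reset idiom: a bare "ExecStart=" clears any inherited start command so the one that follows is the only effective one; without the reset, systemd rejects the unit with exactly the "more than one ExecStart= setting" error quoted in the comment. The diff-or-swap one-liner then installs and activates the new unit only when it differs from what is already on disk. A minimal in-guest check of the result (hypothetical, not part of the test run):

    # Two ExecStart lines exist in the file on disk: the bare reset plus the real command.
    sudo systemctl cat docker | grep -c '^ExecStart='    # expect: 2
    # systemd resolves them to a single effective start command.
    sudo systemctl show docker -p ExecStart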
	I0731 15:05:24.613213    4804 machine.go:97] duration metric: took 547.300083ms to provisionDockerMachine
	I0731 15:05:24.613220    4804 start.go:293] postStartSetup for "running-upgrade-683000" (driver="qemu2")
	I0731 15:05:24.613226    4804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 15:05:24.613277    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 15:05:24.613286    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:05:24.650682    4804 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 15:05:24.652448    4804 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 15:05:24.652456    4804 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1411/.minikube/addons for local assets ...
	I0731 15:05:24.652541    4804 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1411/.minikube/files for local assets ...
	I0731 15:05:24.652662    4804 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem -> 19132.pem in /etc/ssl/certs
	I0731 15:05:24.652781    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 15:05:24.656327    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem --> /etc/ssl/certs/19132.pem (1708 bytes)
	I0731 15:05:24.666217    4804 start.go:296] duration metric: took 52.990708ms for postStartSetup
	I0731 15:05:24.666235    4804 fix.go:56] duration metric: took 611.738458ms for fixHost
	I0731 15:05:24.666290    4804 main.go:141] libmachine: Using SSH client type: native
	I0731 15:05:24.666405    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101166a10] 0x101169270 <nil>  [] 0s} localhost 50272 <nil> <nil>}
	I0731 15:05:24.666409    4804 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 15:05:24.729558    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722463524.587532512
	
	I0731 15:05:24.729565    4804 fix.go:216] guest clock: 1722463524.587532512
	I0731 15:05:24.729569    4804 fix.go:229] Guest: 2024-07-31 15:05:24.587532512 -0700 PDT Remote: 2024-07-31 15:05:24.666237 -0700 PDT m=+0.713830459 (delta=-78.704488ms)
	I0731 15:05:24.729579    4804 fix.go:200] guest clock delta is within tolerance: -78.704488ms
	I0731 15:05:24.729583    4804 start.go:83] releasing machines lock for "running-upgrade-683000", held for 675.094542ms
	I0731 15:05:24.729643    4804 ssh_runner.go:195] Run: cat /version.json
	I0731 15:05:24.729654    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:05:24.729646    4804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 15:05:24.729714    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	W0731 15:05:24.730197    4804 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50272: connect: connection refused
	I0731 15:05:24.730214    4804 retry.go:31] will retry after 305.91547ms: dial tcp [::1]:50272: connect: connection refused
	W0731 15:05:24.761410    4804 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 15:05:24.761457    4804 ssh_runner.go:195] Run: systemctl --version
	I0731 15:05:24.763276    4804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 15:05:24.766427    4804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 15:05:24.766463    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 15:05:24.769234    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 15:05:24.773841    4804 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
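The two find/sed passes above rewrite whatever bridge and podman CNI profiles exist so that their IPv4 subnet (and, for podman, the gateway) land on minikube's pod CIDR, 10.244.0.0/16, while IPv6 dst/subnet entries are dropped; only 87-podman-bridge.conflist matched here. A hypothetical spot-check of the result:

    # The rewritten profile should now carry the pod CIDR.
    grep -n '"subnet"' /etc/cni/net.d/87-podman-bridge.conflist    # expect: "subnet": "10.244.0.0/16"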
	I0731 15:05:24.773848    4804 start.go:495] detecting cgroup driver to use...
	I0731 15:05:24.773921    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 15:05:24.778850    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 15:05:24.781744    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 15:05:24.784720    4804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 15:05:24.784739    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 15:05:24.788215    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 15:05:24.791793    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 15:05:24.794855    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 15:05:24.797618    4804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 15:05:24.800569    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 15:05:24.804359    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 15:05:24.807917    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 15:05:24.811173    4804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 15:05:24.813985    4804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 15:05:24.816708    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:05:24.902104    4804 ssh_runner.go:195] Run: sudo systemctl restart containerd
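The chain of sed edits above rewrites /etc/containerd/config.toml in place: the pause image becomes registry.k8s.io/pause:3.7, the runtime is normalized to io.containerd.runc.v2, conf_dir points at /etc/cni/net.d, unprivileged ports are enabled, and SystemdCgroup = false keeps containerd on the "cgroupfs" driver detected for this guest. After the restart, the edits could be confirmed with (hypothetical check):

    # Both knobs are plain TOML keys, so grep suffices to verify the rewrite.
    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false
    grep -n 'sandbox_image' /etc/containerd/config.toml    # expect: registry.k8s.io/pause:3.7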
	I0731 15:05:24.910836    4804 start.go:495] detecting cgroup driver to use...
	I0731 15:05:24.910901    4804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 15:05:24.916113    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 15:05:24.921141    4804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 15:05:24.927343    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 15:05:24.931905    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 15:05:24.936599    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 15:05:24.941626    4804 ssh_runner.go:195] Run: which cri-dockerd
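/etc/crictl.yaml is how crictl locates the CRI socket: it was pointed at containerd's socket above and has just been rewritten to unix:///var/run/cri-dockerd.sock now that Docker is the selected runtime. A hypothetical spot-check from inside the guest:

    # crictl reads runtime-endpoint from /etc/crictl.yaml, so no flag is needed.
    sudo crictl info     # runtime status via the configured socket
    sudo crictl ps -a    # containers listed through the same CRI endpoint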
	I0731 15:05:24.942915    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 15:05:24.945392    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 15:05:24.950460    4804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 15:05:25.044252    4804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 15:05:25.142246    4804 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 15:05:25.142313    4804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 15:05:25.147708    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:05:25.245532    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 15:05:26.822454    4804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.576934208s)
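The 130-byte /etc/docker/daemon.json written above is not echoed in the log, so its exact contents are an assumption; a minimal file that selects the cgroupfs driver would look like the sketch below, and the docker info probe at the end is the same check the log itself runs further down:

    # Assumed daemon.json payload (not the verbatim 130-byte file from the log):
    printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'    # expect: cgroupfs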
	I0731 15:05:26.822520    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 15:05:26.827279    4804 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 15:05:26.833281    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 15:05:26.837985    4804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 15:05:26.928964    4804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 15:05:26.994844    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:05:27.075028    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 15:05:27.081137    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 15:05:27.086250    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:05:27.167563    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 15:05:27.207033    4804 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 15:05:27.207106    4804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 15:05:27.209152    4804 start.go:563] Will wait 60s for crictl version
	I0731 15:05:27.209209    4804 ssh_runner.go:195] Run: which crictl
	I0731 15:05:27.210731    4804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 15:05:27.222432    4804 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 15:05:27.222506    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 15:05:27.234242    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 15:05:27.254920    4804 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 15:05:27.255042    4804 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 15:05:27.256508    4804 kubeadm.go:883] updating cluster {Name:running-upgrade-683000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-683000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 15:05:27.256547    4804 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 15:05:27.256590    4804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 15:05:27.267143    4804 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 15:05:27.267152    4804 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 15:05:27.267200    4804 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 15:05:27.270236    4804 ssh_runner.go:195] Run: which lz4
	I0731 15:05:27.271520    4804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 15:05:27.272622    4804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 15:05:27.272631    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 15:05:28.237696    4804 docker.go:649] duration metric: took 966.227667ms to copy over tarball
	I0731 15:05:28.237750    4804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 15:05:29.392721    4804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.154978417s)
	I0731 15:05:29.392736    4804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 15:05:29.408653    4804 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 15:05:29.412177    4804 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 15:05:29.417596    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:05:29.482584    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 15:05:29.794553    4804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 15:05:29.810502    4804 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 15:05:29.810518    4804 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
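The preload tarball restored the images under their old k8s.gcr.io names, but this minikube looks them up as registry.k8s.io/..., so the otherwise complete preload is reported as not preloaded and the slower per-image cache path below takes over. A retag of the following form would satisfy the lookup (hypothetical fix; the test instead falls back to the host cache):

    # Alias an old-registry image under the name the lookup expects; each image
    # in the list above would need the same treatment.
    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.24.1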
	I0731 15:05:29.810523    4804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 15:05:29.814350    4804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:05:29.815997    4804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:05:29.817818    4804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:05:29.818032    4804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:05:29.820198    4804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:05:29.820294    4804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:05:29.821373    4804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:05:29.821392    4804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:05:29.822853    4804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:05:29.822881    4804 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:05:29.824276    4804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:05:29.824615    4804 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0731 15:05:29.825710    4804 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:05:29.825783    4804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:05:29.826786    4804 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 15:05:29.827474    4804 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:05:30.252558    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:05:30.253116    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:05:30.272173    4804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 15:05:30.272201    4804 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:05:30.272264    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:05:30.272264    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:05:30.272431    4804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 15:05:30.272444    4804 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:05:30.272470    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:05:30.273378    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:05:30.295409    4804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 15:05:30.295429    4804 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:05:30.295449    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 15:05:30.295487    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:05:30.296411    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 15:05:30.297816    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 15:05:30.299919    4804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 15:05:30.299937    4804 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:05:30.299981    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:05:30.309163    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 15:05:30.312416    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 15:05:30.312439    4804 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 15:05:30.312454    4804 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:05:30.312498    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 15:05:30.323806    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 15:05:30.324257    4804 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 15:05:30.324279    4804 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 15:05:30.324330    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0731 15:05:30.330242    4804 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 15:05:30.330379    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:05:30.341378    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 15:05:30.341841    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 15:05:30.341843    4804 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 15:05:30.341857    4804 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:05:30.341901    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:05:30.344197    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 15:05:30.344197    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 15:05:30.352238    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 15:05:30.352265    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 15:05:30.352282    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 15:05:30.352294    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 15:05:30.352345    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 15:05:30.352437    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 15:05:30.354957    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 15:05:30.354983    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 15:05:30.364458    4804 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 15:05:30.364471    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0731 15:05:30.389917    4804 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 15:05:30.390036    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:05:30.443354    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 15:05:30.449659    4804 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 15:05:30.449685    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 15:05:30.456598    4804 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 15:05:30.456620    4804 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:05:30.456678    4804 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:05:30.528327    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 15:05:30.719527    4804 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 15:05:30.719543    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 15:05:31.124325    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 15:05:31.124408    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 15:05:31.124756    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 15:05:31.131139    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 15:05:31.131219    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 15:05:31.201351    4804 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 15:05:31.201373    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 15:05:31.465826    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 15:05:31.465866    4804 cache_images.go:92] duration metric: took 1.655366375s to LoadCachedImages
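The two "arch mismatch: want arm64 got amd64. fixing" warnings above mean the host cache held amd64 manifests for coredns and storage-provisioner, so the correct variants had to be fetched before the docker-load transfers. A hypothetical way to confirm what actually landed in the guest:

    # Os and Architecture are top-level fields of docker image inspect output.
    docker image inspect --format '{{.Os}}/{{.Architecture}}' gcr.io/k8s-minikube/storage-provisioner:v5    # expect: linux/arm64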
	W0731 15:05:31.465907    4804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0731 15:05:31.465914    4804 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 15:05:31.465971    4804 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-683000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-683000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 15:05:31.466041    4804 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 15:05:31.479473    4804 cni.go:84] Creating CNI manager for ""
	I0731 15:05:31.479488    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:05:31.479494    4804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 15:05:31.479503    4804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-683000 NodeName:running-upgrade-683000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 15:05:31.479569    4804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-683000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
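This generated kubeadm config is shipped to the guest as /var/tmp/minikube/kubeadm.yaml.new a few lines below and only replaces the live kubeadm.yaml once the drift diff further down finds a difference. On kubeadm v1.24 the file can be exercised without touching cluster state by running only the preflight phase (hypothetical invocation, mirroring the init-phase calls later in the log):

    # Run just the preflight checks against the generated config; kubeadm comes
    # from minikube's pinned binaries directory, as in the later phase calls.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new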
	
	I0731 15:05:31.479627    4804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 15:05:31.482793    4804 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 15:05:31.482822    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 15:05:31.485298    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 15:05:31.490793    4804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 15:05:31.495915    4804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 15:05:31.501487    4804 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 15:05:31.503100    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:05:31.588742    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:05:31.594101    4804 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000 for IP: 10.0.2.15
	I0731 15:05:31.594108    4804 certs.go:194] generating shared ca certs ...
	I0731 15:05:31.594116    4804 certs.go:226] acquiring lock for ca certs: {Name:mk0bfd7451d2ce366c95ee7ce2af2fa5265e7335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:05:31.594269    4804 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.key
	I0731 15:05:31.594303    4804 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.key
	I0731 15:05:31.594309    4804 certs.go:256] generating profile certs ...
	I0731 15:05:31.594368    4804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.key
	I0731 15:05:31.594388    4804 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.key.6612542a
	I0731 15:05:31.594400    4804 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt.6612542a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 15:05:31.695099    4804 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt.6612542a ...
	I0731 15:05:31.695105    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt.6612542a: {Name:mkfa2733d709d26143e640d6b5144cefa3a5b71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:05:31.695466    4804 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.key.6612542a ...
	I0731 15:05:31.695472    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.key.6612542a: {Name:mk9e77f3bd88096f299a6d6da8c171f4e47b54d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:05:31.695620    4804 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt.6612542a -> /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt
	I0731 15:05:31.695754    4804 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.key.6612542a -> /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.key
	I0731 15:05:31.695877    4804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/proxy-client.key
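The apiserver certificate generated above must carry every address a client might dial, hence the IP SAN list [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]: the service-network VIPs, loopback, and the guest's own address. The SANs can be read back from the written cert on the host (hypothetical check):

    # Print the Subject Alternative Name extension of the freshly minted cert.
    openssl x509 -noout -text \
        -in /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt \
        | grep -A1 'Subject Alternative Name'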
	I0731 15:05:31.696241    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913.pem (1338 bytes)
	W0731 15:05:31.696281    4804 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913_empty.pem, impossibly tiny 0 bytes
	I0731 15:05:31.696289    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 15:05:31.696312    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem (1078 bytes)
	I0731 15:05:31.696331    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem (1123 bytes)
	I0731 15:05:31.696351    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem (1679 bytes)
	I0731 15:05:31.696399    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem (1708 bytes)
	I0731 15:05:31.696733    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 15:05:31.704215    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 15:05:31.711899    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 15:05:31.719326    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 15:05:31.726416    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 15:05:31.732794    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 15:05:31.739760    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 15:05:31.746753    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 15:05:31.753295    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913.pem --> /usr/share/ca-certificates/1913.pem (1338 bytes)
	I0731 15:05:31.760257    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem --> /usr/share/ca-certificates/19132.pem (1708 bytes)
	I0731 15:05:31.767313    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 15:05:31.773955    4804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 15:05:31.778588    4804 ssh_runner.go:195] Run: openssl version
	I0731 15:05:31.780211    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1913.pem && ln -fs /usr/share/ca-certificates/1913.pem /etc/ssl/certs/1913.pem"
	I0731 15:05:31.783597    4804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1913.pem
	I0731 15:05:31.785037    4804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:34 /usr/share/ca-certificates/1913.pem
	I0731 15:05:31.785055    4804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1913.pem
	I0731 15:05:31.786899    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1913.pem /etc/ssl/certs/51391683.0"
	I0731 15:05:31.789500    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19132.pem && ln -fs /usr/share/ca-certificates/19132.pem /etc/ssl/certs/19132.pem"
	I0731 15:05:31.792762    4804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19132.pem
	I0731 15:05:31.794117    4804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:34 /usr/share/ca-certificates/19132.pem
	I0731 15:05:31.794140    4804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19132.pem
	I0731 15:05:31.795714    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 15:05:31.798440    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 15:05:31.801328    4804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:05:31.802725    4804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:05:31.802744    4804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:05:31.804512    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
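The b5213941.0 link created above is an OpenSSL subject-hash name: "openssl x509 -hash" prints the hash of the certificate's subject, and a <hash>.0 symlink in /etc/ssl/certs is what CApath-style verification looks up (the 51391683.0 and 3ec20f2e.0 links earlier follow the same scheme). Reproducing the name by hand:

    # The subject hash doubles as the certificate's CApath filename.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0    # symlink back to minikubeCA.pem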
	I0731 15:05:31.807722    4804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 15:05:31.809246    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 15:05:31.811139    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 15:05:31.812961    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 15:05:31.814640    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 15:05:31.816654    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 15:05:31.818495    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
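Each -checkend 86400 probe above asks OpenSSL whether the certificate expires within the next 86400 seconds; exit status 0 (still valid for at least 24 hours) is what lets the run skip regenerating that cert. For example:

    # Exit 0: valid for at least another 24h. Exit 1: will have expired by then.
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
        && echo "still valid for 24h"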
	I0731 15:05:31.820178    4804 kubeadm.go:392] StartCluster: {Name:running-upgrade-683000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-683000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:05:31.820245    4804 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 15:05:31.830432    4804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 15:05:31.833472    4804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 15:05:31.833477    4804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 15:05:31.833495    4804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 15:05:31.836517    4804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:05:31.836767    4804 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-683000" does not appear in /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:05:31.836823    4804 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1411/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-683000" cluster setting kubeconfig missing "running-upgrade-683000" context setting]
	I0731 15:05:31.836952    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:05:31.838094    4804 kapi.go:59] client config for running-upgrade-683000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024fc700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 15:05:31.838411    4804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 15:05:31.841210    4804 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-683000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0731 15:05:31.841215    4804 kubeadm.go:1160] stopping kube-system containers ...
	I0731 15:05:31.841252    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 15:05:31.852177    4804 docker.go:483] Stopping containers: [010ea24cdd43 0f4ffb3e58e9 c11f511d0f64 9b206accf119 e7a46ccd2d88 d4309a5fa412 490c12c1ecd4 b50b1b96fc94 70c9561862f0 d37dd9c477b4 50268143fc30 0ed38fe99bd6]
	I0731 15:05:31.852267    4804 ssh_runner.go:195] Run: docker stop 010ea24cdd43 0f4ffb3e58e9 c11f511d0f64 9b206accf119 e7a46ccd2d88 d4309a5fa412 490c12c1ecd4 b50b1b96fc94 70c9561862f0 d37dd9c477b4 50268143fc30 0ed38fe99bd6
	I0731 15:05:32.146805    4804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 15:05:32.231375    4804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:05:32.237206    4804 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 31 22:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 31 22:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 22:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 31 22:05 /etc/kubernetes/scheduler.conf
	
	I0731 15:05:32.237260    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf
	I0731 15:05:32.240085    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:05:32.240126    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:05:32.244795    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf
	I0731 15:05:32.248316    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:05:32.248355    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:05:32.253943    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf
	I0731 15:05:32.258790    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:05:32.258829    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:05:32.261958    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf
	I0731 15:05:32.266670    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:05:32.266708    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 15:05:32.269936    4804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:05:32.273860    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:05:32.297439    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:05:32.814348    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:05:33.013388    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:05:33.038747    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:05:33.058577    4804 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:05:33.058655    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:05:33.560757    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:05:34.060693    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:05:34.065395    4804 api_server.go:72] duration metric: took 1.0068385s to wait for apiserver process to appear ...
	I0731 15:05:34.065403    4804 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:05:34.065413    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:05:39.067483    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:05:39.067541    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:05:44.068446    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:05:44.068508    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:05:49.070486    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:05:49.070557    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:05:54.073542    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:05:54.073621    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:05:59.076512    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:05:59.076611    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:04.079797    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:04.079882    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:09.083043    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:09.083115    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:14.086194    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:14.086267    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:19.089096    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:19.089131    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:24.091664    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:24.091740    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:29.094527    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:29.094602    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:34.097323    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
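
Every healthz probe above fails the same way: "Client.Timeout exceeded while awaiting headers" after roughly five seconds, meaning the apiserver never answers on 10.0.2.15:8443 (a symptom of the QEMU usermode-networking connectivity problems that dominate this report). The sketch below reconstructs the shape of that polling loop; the 5-second per-probe timeout matches the gaps in the log, but the overall budget and the InsecureSkipVerify setting are assumptions for a self-contained example (the real client presumably trusts the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s spacing between probes in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption; real client verifies the cluster CA
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body))
                    return
                }
            }
            // on failure the real flow falls through to diagnostic log gathering, then retries
        }
        fmt.Println("apiserver never became healthy")
    }
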
	I0731 15:06:34.097777    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:06:34.135661    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:06:34.135806    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:06:34.158236    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:06:34.158364    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:06:34.173279    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:06:34.173356    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:06:34.185961    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:06:34.186070    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:06:34.196538    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:06:34.196607    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:06:34.207143    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:06:34.207216    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:06:34.216929    4804 logs.go:276] 0 containers: []
	W0731 15:06:34.216940    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:06:34.216991    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:06:34.228311    4804 logs.go:276] 0 containers: []
	W0731 15:06:34.228322    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:06:34.228329    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:06:34.228334    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:06:34.240042    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:06:34.240054    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:06:34.311730    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:06:34.311743    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:06:34.325717    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:06:34.325727    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:06:34.337460    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:06:34.337474    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:06:34.362255    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:06:34.362265    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:06:34.400163    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:06:34.400169    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:06:34.404613    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:06:34.404618    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:06:34.425664    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:06:34.425676    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:06:34.440688    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:06:34.440697    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:06:34.452250    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:06:34.452259    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:06:34.465314    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:06:34.465326    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:06:34.482698    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:06:34.482707    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:06:34.496531    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:06:34.496541    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:06:34.512503    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:06:34.512512    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
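
Once a probe times out, the run enters the diagnostic cycle that repeats for the remainder of this section: enumerate container IDs per control-plane component with a filtered `docker ps`, then tail the last 400 lines of each container's logs (plus kubelet/docker journals and dmesg). A self-contained sketch of that gathering pass, again an editorial reconstruction of the commands visible in the log rather than minikube's logs.go implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            // matches the log's: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
            }
        }
    }
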
	I0731 15:06:37.029621    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:42.030960    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:42.031385    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:06:42.071011    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:06:42.071154    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:06:42.092324    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:06:42.092427    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:06:42.107817    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:06:42.107889    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:06:42.120659    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:06:42.120747    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:06:42.131704    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:06:42.131775    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:06:42.142383    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:06:42.142459    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:06:42.152108    4804 logs.go:276] 0 containers: []
	W0731 15:06:42.152120    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:06:42.152179    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:06:42.162246    4804 logs.go:276] 0 containers: []
	W0731 15:06:42.162256    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:06:42.162264    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:06:42.162270    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:06:42.166447    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:06:42.166453    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:06:42.201878    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:06:42.201888    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:06:42.224951    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:06:42.224961    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:06:42.238999    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:06:42.239009    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:06:42.254837    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:06:42.254847    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:06:42.295593    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:06:42.295606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:06:42.330532    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:06:42.330542    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:06:42.345736    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:06:42.345748    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:06:42.362935    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:06:42.362943    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:06:42.388696    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:06:42.388703    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:06:42.399553    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:06:42.399563    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:06:42.413441    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:06:42.413450    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:06:42.428510    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:06:42.428519    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:06:42.440263    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:06:42.440275    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:06:44.953714    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:49.954765    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:49.955108    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:06:49.994886    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:06:49.995039    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:06:50.018708    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:06:50.018784    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:06:50.037264    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:06:50.037340    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:06:50.049111    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:06:50.049188    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:06:50.059410    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:06:50.059486    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:06:50.073739    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:06:50.073813    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:06:50.083935    4804 logs.go:276] 0 containers: []
	W0731 15:06:50.083946    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:06:50.084013    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:06:50.093780    4804 logs.go:276] 0 containers: []
	W0731 15:06:50.093789    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:06:50.093797    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:06:50.093802    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:06:50.111884    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:06:50.111897    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:06:50.148499    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:06:50.148514    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:06:50.162918    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:06:50.162932    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:06:50.182975    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:06:50.182988    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:06:50.196894    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:06:50.196905    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:06:50.208816    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:06:50.208829    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:06:50.220664    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:06:50.220677    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:06:50.246999    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:06:50.247014    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:06:50.262600    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:06:50.262614    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:06:50.275450    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:06:50.275462    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:06:50.287318    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:06:50.287329    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:06:50.329477    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:06:50.329494    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:06:50.334192    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:06:50.334200    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:06:50.347772    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:06:50.347782    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:06:52.865958    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:06:57.868815    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:06:57.869060    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:06:57.891101    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:06:57.891193    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:06:57.908324    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:06:57.908403    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:06:57.921670    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:06:57.921745    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:06:57.934343    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:06:57.934419    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:06:57.944897    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:06:57.944967    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:06:57.955890    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:06:57.955959    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:06:57.965881    4804 logs.go:276] 0 containers: []
	W0731 15:06:57.965892    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:06:57.965953    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:06:57.975850    4804 logs.go:276] 0 containers: []
	W0731 15:06:57.975860    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:06:57.975868    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:06:57.975873    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:06:58.010850    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:06:58.010864    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:06:58.024449    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:06:58.024463    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:06:58.036361    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:06:58.036372    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:06:58.047930    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:06:58.047942    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:06:58.088612    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:06:58.088621    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:06:58.103876    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:06:58.103886    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:06:58.122095    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:06:58.122105    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:06:58.126275    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:06:58.126284    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:06:58.145566    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:06:58.145574    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:06:58.156519    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:06:58.156529    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:06:58.169353    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:06:58.169363    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:06:58.180070    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:06:58.180089    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:06:58.207279    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:06:58.207291    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:06:58.221632    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:06:58.221645    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:00.738164    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:05.740881    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:05.741264    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:05.774334    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:05.774475    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:05.794321    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:05.794420    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:05.808239    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:05.808313    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:05.819945    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:05.820024    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:05.830940    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:05.831011    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:05.841611    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:05.841680    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:05.851738    4804 logs.go:276] 0 containers: []
	W0731 15:07:05.851751    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:05.851815    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:05.862157    4804 logs.go:276] 0 containers: []
	W0731 15:07:05.862169    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:05.862175    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:05.862180    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:05.875929    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:05.875942    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:05.890860    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:05.890873    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:05.906032    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:05.906045    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:05.924082    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:05.924093    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:05.928797    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:05.928804    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:05.942783    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:05.942796    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:05.956046    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:05.956058    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:05.981769    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:05.981776    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:06.017900    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:06.017909    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:06.029758    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:06.029768    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:06.041528    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:06.041538    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:06.052545    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:06.052561    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:07:06.071346    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:06.071357    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:06.111733    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:06.111740    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:08.633671    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:13.635902    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:13.636277    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:13.670652    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:13.670780    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:13.694015    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:13.694118    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:13.713678    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:13.713751    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:13.726759    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:13.726822    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:13.737441    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:13.737513    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:13.748347    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:13.748406    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:13.759059    4804 logs.go:276] 0 containers: []
	W0731 15:07:13.759069    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:13.759126    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:13.774284    4804 logs.go:276] 0 containers: []
	W0731 15:07:13.774295    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:13.774301    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:13.774306    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:13.785442    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:13.785453    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:13.811087    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:13.811096    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:07:13.823218    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:13.823231    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:13.864576    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:13.864584    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:13.885013    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:13.885025    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:13.901019    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:13.901032    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:13.912492    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:13.912503    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:13.916784    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:13.916793    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:13.930111    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:13.930120    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:13.945409    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:13.945418    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:13.957567    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:13.957579    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:13.996542    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:13.996555    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:14.010606    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:14.010619    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:14.026560    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:14.026572    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:16.546514    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:21.549398    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:21.549808    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:21.589717    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:21.589840    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:21.611463    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:21.611584    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:21.627097    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:21.627167    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:21.640049    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:21.640123    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:21.651387    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:21.651463    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:21.669607    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:21.669674    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:21.679664    4804 logs.go:276] 0 containers: []
	W0731 15:07:21.679675    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:21.679737    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:21.690016    4804 logs.go:276] 0 containers: []
	W0731 15:07:21.690026    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:21.690035    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:21.690041    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:21.703654    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:21.703667    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:21.717918    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:21.717931    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:21.732397    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:21.732407    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:21.750587    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:21.750597    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:21.776246    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:21.776252    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:21.790400    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:21.790413    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:21.801706    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:21.801717    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:21.813018    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:21.813030    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:21.833172    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:21.833185    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:21.845320    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:21.845333    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:21.880052    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:21.880065    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:21.895893    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:21.895902    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:07:21.907687    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:21.907700    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:21.948730    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:21.948737    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:24.455033    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:29.457346    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:29.457736    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:29.494190    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:29.494314    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:29.516357    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:29.516454    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:29.531810    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:29.531884    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:29.544158    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:29.544232    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:29.555784    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:29.555862    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:29.569571    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:29.569642    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:29.580405    4804 logs.go:276] 0 containers: []
	W0731 15:07:29.580416    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:29.580476    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:29.590628    4804 logs.go:276] 0 containers: []
	W0731 15:07:29.590655    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:29.590665    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:29.590670    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:29.595106    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:29.595114    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:29.606696    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:29.606707    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:29.618390    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:29.618403    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:29.637404    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:29.637416    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:29.673516    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:29.673533    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:29.690023    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:29.690038    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:29.730691    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:29.730699    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:29.744509    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:29.744524    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:29.758983    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:29.758996    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:29.770344    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:29.770355    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:07:29.782074    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:29.782086    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:29.800231    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:29.800244    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:29.820063    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:29.820076    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:29.836610    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:29.836619    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:32.363883    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:37.366100    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:37.366236    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:37.379246    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:37.379323    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:37.390667    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:37.390736    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:37.401223    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:37.401292    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:37.411791    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:37.411862    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:37.421938    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:37.422001    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:37.432365    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:37.432430    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:37.442956    4804 logs.go:276] 0 containers: []
	W0731 15:07:37.442968    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:37.443025    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:37.455186    4804 logs.go:276] 0 containers: []
	W0731 15:07:37.455199    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:37.455207    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:37.455213    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:37.459768    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:37.459777    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:37.474085    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:37.474096    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:37.497328    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:37.497337    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:37.508870    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:37.508881    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:37.525790    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:37.525802    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:37.536643    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:37.536657    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:37.577427    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:37.577435    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:37.611108    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:37.611120    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:37.630505    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:37.630517    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:07:37.641903    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:37.641913    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:37.655614    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:37.655625    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:37.669700    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:37.669713    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:37.680813    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:37.680825    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:37.694346    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:37.694358    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:40.210562    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:45.213268    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:45.213423    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:45.225484    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:45.225557    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:45.236255    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:45.236322    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:45.247160    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:45.247228    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:45.257728    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:45.257795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:45.272170    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:45.272237    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:45.283060    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:45.283128    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:45.297152    4804 logs.go:276] 0 containers: []
	W0731 15:07:45.297163    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:45.297223    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:45.309537    4804 logs.go:276] 0 containers: []
	W0731 15:07:45.309548    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:45.309556    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:45.309561    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:45.333706    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:45.333716    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:45.347260    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:45.347271    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:45.363047    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:45.363057    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:45.374546    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:45.374558    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:45.400095    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:45.400106    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:45.442499    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:45.442505    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:45.477631    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:45.477643    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:45.489677    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:45.489688    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:45.504254    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:45.504263    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:07:45.516509    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:45.516520    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:45.521068    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:45.521077    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:45.535301    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:45.535311    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:45.549730    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:45.549741    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:45.561763    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:45.561774    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:48.081022    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:07:53.083252    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:07:53.083473    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:07:53.106985    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:07:53.107107    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:07:53.123922    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:07:53.124016    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:07:53.137417    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:07:53.137500    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:07:53.148874    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:07:53.148951    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:07:53.159302    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:07:53.159372    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:07:53.169519    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:07:53.169586    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:07:53.179862    4804 logs.go:276] 0 containers: []
	W0731 15:07:53.179873    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:07:53.179931    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:07:53.190662    4804 logs.go:276] 0 containers: []
	W0731 15:07:53.190672    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:07:53.190680    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:07:53.190687    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:07:53.217270    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:07:53.217283    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:07:53.235074    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:07:53.235083    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:07:53.261325    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:07:53.261335    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:07:53.267115    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:07:53.267126    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:07:53.306091    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:07:53.306104    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:07:53.320199    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:07:53.320210    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:07:53.333923    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:07:53.333935    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:07:53.349218    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:07:53.349231    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:07:53.360610    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:07:53.360624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:07:53.371514    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:07:53.371526    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:07:53.410852    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:07:53.410862    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:07:53.429005    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:07:53.429015    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:07:53.443516    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:07:53.443528    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:07:53.455045    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:07:53.455058    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
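After discovery, each round tails the last 400 lines of every container found, pulls the kubelet and docker/cri-docker units from journalctl, filters dmesg to warning-and-worse kernel messages (-H human-readable, -P no pager, -L=never no color), and lists container status, preferring crictl when installed and falling back to docker ps -a via the `which crictl || echo crictl` / `|| sudo docker ps -a` chain. A sketch of two of those commands, run locally via /bin/bash -c purely for illustration (in the report they run over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command the way the log's ssh_runner entries do,
// returning combined stdout/stderr so failures are still visible.
func run(cmd string) string {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("error: %v\n%s", err, out)
	}
	return string(out)
}

func main() {
	// Last 400 lines of one container's logs (ID taken from the report).
	fmt.Println(run("docker logs --tail 400 096fd66a21ed"))
	// Container status: prefer crictl if present, fall back to docker.
	fmt.Println(run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
}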
	I0731 15:07:55.967488    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:00.967644    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:00.967755    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:00.979505    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:00.979578    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:00.990150    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:00.990219    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:01.000756    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:01.000837    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:01.011444    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:01.011518    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:01.022554    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:01.022633    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:01.033167    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:01.033229    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:01.046934    4804 logs.go:276] 0 containers: []
	W0731 15:08:01.046946    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:01.047004    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:01.057308    4804 logs.go:276] 0 containers: []
	W0731 15:08:01.057318    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:01.057327    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:01.057335    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:01.079248    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:01.079258    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:01.120599    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:01.120612    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:01.125005    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:01.125014    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:01.160394    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:01.160408    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:01.173943    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:01.173957    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:01.185861    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:01.185873    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:01.203790    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:01.203801    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:01.215689    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:01.215700    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:01.235534    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:01.235543    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:01.249897    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:01.249910    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:01.265413    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:01.265427    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:01.278632    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:01.278649    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:01.302777    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:01.302784    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:01.324774    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:01.324788    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:03.838313    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:08.839798    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:08.840332    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:08.881685    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:08.881837    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:08.903583    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:08.903723    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:08.919353    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:08.919427    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:08.933513    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:08.933585    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:08.944296    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:08.944368    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:08.955444    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:08.955506    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:08.965426    4804 logs.go:276] 0 containers: []
	W0731 15:08:08.965440    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:08.965508    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:08.975754    4804 logs.go:276] 0 containers: []
	W0731 15:08:08.975766    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:08.975773    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:08.975779    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:09.015540    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:09.015550    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:09.053667    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:09.053680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:09.067503    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:09.067516    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:09.081455    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:09.081466    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:09.103285    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:09.103296    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:09.121213    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:09.121224    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:09.132584    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:09.132597    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:09.149337    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:09.149346    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:09.163466    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:09.163479    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:09.180293    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:09.180304    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:09.192044    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:09.192055    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:09.196570    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:09.196575    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:09.214331    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:09.214340    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:09.226197    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:09.226210    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:11.754062    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:16.756669    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:16.756883    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:16.779507    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:16.779610    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:16.795068    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:16.795141    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:16.808038    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:16.808104    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:16.819200    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:16.819264    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:16.829802    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:16.829875    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:16.839914    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:16.839976    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:16.850602    4804 logs.go:276] 0 containers: []
	W0731 15:08:16.850615    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:16.850677    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:16.860412    4804 logs.go:276] 0 containers: []
	W0731 15:08:16.860424    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:16.860431    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:16.860439    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:16.864711    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:16.864717    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:16.898507    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:16.898520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:16.909957    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:16.909968    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:16.927375    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:16.927387    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:16.951533    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:16.951541    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:16.967575    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:16.967584    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:16.979203    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:16.979215    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:16.993290    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:16.993299    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:17.013035    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:17.013044    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:17.026780    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:17.026790    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:17.038389    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:17.038398    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:17.077102    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:17.077113    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:17.091843    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:17.091852    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:17.107256    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:17.107269    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:19.621143    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:24.621702    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:24.621860    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:24.634304    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:24.634382    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:24.645366    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:24.645437    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:24.655608    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:24.655675    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:24.670651    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:24.670721    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:24.681534    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:24.681601    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:24.692325    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:24.692393    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:24.702789    4804 logs.go:276] 0 containers: []
	W0731 15:08:24.702801    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:24.702858    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:24.713677    4804 logs.go:276] 0 containers: []
	W0731 15:08:24.713687    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:24.713695    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:24.713700    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:24.724317    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:24.724330    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:24.761914    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:24.761925    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:24.776041    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:24.776052    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:24.800302    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:24.800313    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:24.814030    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:24.814040    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:24.833938    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:24.833948    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:24.857378    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:24.857387    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:24.896794    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:24.896804    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:24.901348    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:24.901354    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:24.913943    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:24.913954    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:24.938227    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:24.938240    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:24.957232    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:24.957244    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:24.975219    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:24.975231    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:24.986763    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:24.986776    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:27.501612    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:32.502864    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:32.503058    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:32.538229    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:32.538311    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:32.558277    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:32.558342    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:32.568603    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:32.568664    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:32.579703    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:32.579784    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:32.590362    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:32.590429    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:32.600891    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:32.600963    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:32.610864    4804 logs.go:276] 0 containers: []
	W0731 15:08:32.610876    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:32.610926    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:32.621206    4804 logs.go:276] 0 containers: []
	W0731 15:08:32.621217    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:32.621225    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:32.621231    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:32.625450    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:32.625458    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:32.639392    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:32.639405    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:32.651112    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:32.651121    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:32.687615    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:32.687625    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:32.708721    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:32.708736    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:32.726535    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:32.726550    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:32.741627    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:32.741642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:32.753500    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:32.753513    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:32.765136    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:32.765145    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:32.776856    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:32.776867    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:32.818293    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:32.818301    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:32.831463    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:32.831473    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:32.851177    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:32.851187    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:32.870555    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:32.870566    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:35.397260    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:40.399670    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:40.400075    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:40.440698    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:40.440833    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:40.465035    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:40.465150    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:40.480059    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:40.480136    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:40.494558    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:40.494623    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:40.513468    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:40.513544    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:40.525953    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:40.526020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:40.536393    4804 logs.go:276] 0 containers: []
	W0731 15:08:40.536402    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:40.536453    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:40.546834    4804 logs.go:276] 0 containers: []
	W0731 15:08:40.546847    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:40.546856    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:40.546863    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:40.558192    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:40.558207    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:40.570142    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:40.570155    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:40.581224    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:40.581234    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:40.596336    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:40.596348    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:40.610418    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:40.610431    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:40.630761    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:40.630772    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:40.642113    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:40.642123    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:40.657918    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:40.657928    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:40.672855    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:40.672865    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:40.711131    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:40.711138    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:40.745242    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:40.745255    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:40.764367    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:40.764378    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:40.788797    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:40.788805    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:40.793039    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:40.793046    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:43.315430    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:48.317725    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:48.317914    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:48.329499    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:48.329578    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:48.340182    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:48.340262    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:48.351067    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:48.351136    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:48.362750    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:48.362819    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:48.373418    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:48.373477    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:48.383976    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:48.384037    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:48.393724    4804 logs.go:276] 0 containers: []
	W0731 15:08:48.393734    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:48.393795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:48.404232    4804 logs.go:276] 0 containers: []
	W0731 15:08:48.404243    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:48.404252    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:48.404257    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:48.421899    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:48.421909    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:48.433392    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:48.433406    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:48.445665    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:48.445680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:48.461155    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:48.461167    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:48.472899    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:48.472908    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:48.497697    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:48.497705    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:48.512308    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:48.512319    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:48.534569    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:48.534579    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:48.549060    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:48.549070    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:48.562762    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:48.562772    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:48.604941    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:48.604949    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:48.644895    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:48.644906    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:48.659046    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:48.659057    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:48.670686    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:48.670698    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:51.177732    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:56.180036    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:56.180465    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:56.219789    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:56.219918    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:56.241564    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:56.241654    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:56.257174    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:56.257254    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:56.272232    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:56.272303    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:56.288669    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:56.288732    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:56.299472    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:56.299531    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:56.309865    4804 logs.go:276] 0 containers: []
	W0731 15:08:56.309883    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:56.309946    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:56.320108    4804 logs.go:276] 0 containers: []
	W0731 15:08:56.320119    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:56.320126    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:56.320131    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:56.332748    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:56.332759    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:56.358339    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:56.358358    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:56.399036    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:56.399047    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:56.403577    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:56.403585    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:56.417353    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:56.417365    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:56.439691    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:56.439702    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:56.453915    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:56.453928    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:56.474147    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:56.474156    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:56.492451    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:56.492463    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:56.504787    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:56.504800    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:56.522110    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:56.522122    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:56.563809    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:56.563824    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:56.578454    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:56.578468    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:56.596073    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:56.596085    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:59.110654    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:04.111265    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
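Note the change in failure mode here: every other probe in this stretch fails with "context deadline exceeded" (the client's 5-second timeout firing while still awaiting response headers), whereas this one fails with "dial tcp 10.0.2.15:8443: i/o timeout", meaning the TCP connection itself never completed. Either way, nothing is answering on 10.0.2.15:8443.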
	I0731 15:09:04.111353    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:04.123295    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:04.123372    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:04.134723    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:04.134795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:04.154703    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:04.154774    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:04.167063    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:04.167143    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:04.178824    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:04.178894    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:04.197642    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:04.197714    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:04.209291    4804 logs.go:276] 0 containers: []
	W0731 15:09:04.209302    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:04.209359    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:04.220874    4804 logs.go:276] 0 containers: []
	W0731 15:09:04.220884    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:04.220892    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:04.220898    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:04.245580    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:04.245594    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:04.259535    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:04.259547    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:04.303316    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:04.303332    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:04.316455    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:04.316467    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:04.328742    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:04.328758    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:04.350684    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:04.350698    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:04.366493    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:04.366511    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:04.383869    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:04.383882    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:04.397839    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:04.397854    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:04.422451    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:04.422466    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:04.427544    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:04.427558    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:04.465970    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:04.465982    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:04.481491    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:04.481503    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:04.496909    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:04.496922    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:07.014258    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:12.016429    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:12.016662    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:12.038320    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:12.038427    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:12.054559    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:12.054645    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:12.066897    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:12.066970    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:12.084164    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:12.084241    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:12.095077    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:12.095144    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:12.105744    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:12.105809    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:12.116081    4804 logs.go:276] 0 containers: []
	W0731 15:09:12.116092    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:12.116160    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:12.126596    4804 logs.go:276] 0 containers: []
	W0731 15:09:12.126609    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:12.126617    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:12.126621    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:12.167812    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:12.167823    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:12.171923    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:12.171933    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:12.192011    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:12.192022    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:12.211269    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:12.211280    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:12.222998    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:12.223016    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:12.238735    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:12.238747    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:12.252818    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:12.252827    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:12.264141    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:12.264150    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:12.278920    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:12.278933    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:12.290365    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:12.290379    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:12.302248    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:12.302262    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:12.338247    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:12.338259    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:12.356409    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:12.356420    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:12.371063    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:12.371072    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:14.894163    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:19.896338    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:19.896505    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:19.912406    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:19.912483    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:19.922980    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:19.923049    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:19.933975    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:19.934041    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:19.951791    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:19.951866    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:19.962375    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:19.962449    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:19.973424    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:19.973490    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:19.983468    4804 logs.go:276] 0 containers: []
	W0731 15:09:19.983479    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:19.983531    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:19.993886    4804 logs.go:276] 0 containers: []
	W0731 15:09:19.993899    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:19.993907    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:19.993913    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:20.030468    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:20.030479    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:20.044425    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:20.044439    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:20.064048    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:20.064060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:20.078463    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:20.078473    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:20.094198    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:20.094210    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:20.112600    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:20.112613    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:20.153664    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:20.153672    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:20.165026    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:20.165038    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:20.175974    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:20.175985    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:20.198665    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:20.198678    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:20.209946    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:20.209962    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:20.214737    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:20.214746    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:20.228985    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:20.228997    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:20.242399    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:20.242413    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
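Each "Gathering logs" pass above follows one pattern: enumerate container IDs per control-plane component with docker ps name filters, then tail the last 400 lines of each matching container. The following is a minimal, self-contained Go sketch of that shell-out pattern (hypothetical helper names, not minikube's actual logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists Docker container IDs whose name matches the given
// k8s component, mirroring the `docker ps -a --filter=name=k8s_<component>
// --format={{.ID}}` calls in the log above. Hypothetical helper.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in `docker logs --tail 400 <id>` above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}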
	I0731 15:09:22.756137    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:27.758355    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:27.758497    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:27.769354    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:27.769427    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:27.781688    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:27.781753    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:27.791693    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:27.791758    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:27.805189    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:27.805267    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:27.817672    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:27.817747    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:27.828984    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:27.829051    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:27.856284    4804 logs.go:276] 0 containers: []
	W0731 15:09:27.856297    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:27.856361    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:27.866792    4804 logs.go:276] 0 containers: []
	W0731 15:09:27.866804    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:27.866811    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:27.866817    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:27.885128    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:27.885138    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:27.898523    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:27.898532    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:27.909943    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:27.909959    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:27.928990    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:27.929001    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:27.940226    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:27.940238    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:27.962540    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:27.962547    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:28.001568    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:28.001580    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:28.016463    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:28.016474    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:28.028247    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:28.028256    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:28.039924    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:28.039936    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:28.054806    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:28.054816    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:28.069551    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:28.069562    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:28.089246    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:28.089255    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:28.093901    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:28.093909    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:30.631112    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:35.633499    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:35.633675    4804 kubeadm.go:597] duration metric: took 4m3.797122s to restartPrimaryControlPlane
	W0731 15:09:35.633801    4804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 15:09:35.633851    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 15:09:36.594893    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 15:09:36.600099    4804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:09:36.602884    4804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:09:36.605730    4804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 15:09:36.605737    4804 kubeadm.go:157] found existing configuration files:
	
	I0731 15:09:36.605758    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf
	I0731 15:09:36.608138    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 15:09:36.608163    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:09:36.610762    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf
	I0731 15:09:36.613881    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 15:09:36.613904    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:09:36.616393    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf
	I0731 15:09:36.619019    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 15:09:36.619046    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:09:36.622158    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf
	I0731 15:09:36.624615    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 15:09:36.624636    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 15:09:36.627294    4804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 15:09:36.645229    4804 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 15:09:36.645260    4804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 15:09:36.691357    4804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 15:09:36.691417    4804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 15:09:36.691506    4804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 15:09:36.740888    4804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 15:09:36.745851    4804 out.go:204]   - Generating certificates and keys ...
	I0731 15:09:36.745888    4804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 15:09:36.745927    4804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 15:09:36.745973    4804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 15:09:36.746012    4804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 15:09:36.746059    4804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 15:09:36.746093    4804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 15:09:36.746129    4804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 15:09:36.746171    4804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 15:09:36.746215    4804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 15:09:36.746262    4804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 15:09:36.746287    4804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 15:09:36.746317    4804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 15:09:36.844694    4804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 15:09:36.972600    4804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 15:09:37.095955    4804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 15:09:37.173266    4804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 15:09:37.202734    4804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 15:09:37.203366    4804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 15:09:37.203392    4804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 15:09:37.280899    4804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 15:09:37.285054    4804 out.go:204]   - Booting up control plane ...
	I0731 15:09:37.285104    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 15:09:37.285146    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 15:09:37.285179    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 15:09:37.285217    4804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 15:09:37.285303    4804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 15:09:41.789597    4804 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505693 seconds
	I0731 15:09:41.789904    4804 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 15:09:41.797581    4804 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 15:09:42.310869    4804 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 15:09:42.310985    4804 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-683000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 15:09:42.816152    4804 kubeadm.go:310] [bootstrap-token] Using token: svmwkp.h1lf5uy1wworw3a0
	I0731 15:09:42.822438    4804 out.go:204]   - Configuring RBAC rules ...
	I0731 15:09:42.822506    4804 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 15:09:42.822554    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 15:09:42.824612    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 15:09:42.826131    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 15:09:42.826804    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 15:09:42.827675    4804 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 15:09:42.830783    4804 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 15:09:42.999322    4804 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 15:09:43.221071    4804 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 15:09:43.221539    4804 kubeadm.go:310] 
	I0731 15:09:43.221571    4804 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 15:09:43.221577    4804 kubeadm.go:310] 
	I0731 15:09:43.221612    4804 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 15:09:43.221616    4804 kubeadm.go:310] 
	I0731 15:09:43.221630    4804 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 15:09:43.221660    4804 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 15:09:43.221688    4804 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 15:09:43.221692    4804 kubeadm.go:310] 
	I0731 15:09:43.221720    4804 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 15:09:43.221723    4804 kubeadm.go:310] 
	I0731 15:09:43.221757    4804 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 15:09:43.221763    4804 kubeadm.go:310] 
	I0731 15:09:43.221803    4804 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 15:09:43.221846    4804 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 15:09:43.221888    4804 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 15:09:43.221893    4804 kubeadm.go:310] 
	I0731 15:09:43.221933    4804 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 15:09:43.222034    4804 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 15:09:43.222057    4804 kubeadm.go:310] 
	I0731 15:09:43.222102    4804 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token svmwkp.h1lf5uy1wworw3a0 \
	I0731 15:09:43.222154    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a \
	I0731 15:09:43.222165    4804 kubeadm.go:310] 	--control-plane 
	I0731 15:09:43.222192    4804 kubeadm.go:310] 
	I0731 15:09:43.222256    4804 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 15:09:43.222262    4804 kubeadm.go:310] 
	I0731 15:09:43.222321    4804 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token svmwkp.h1lf5uy1wworw3a0 \
	I0731 15:09:43.222390    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a 
	I0731 15:09:43.222442    4804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 15:09:43.222450    4804 cni.go:84] Creating CNI manager for ""
	I0731 15:09:43.222458    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:09:43.228835    4804 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 15:09:43.235976    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 15:09:43.238909    4804 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 15:09:43.243880    4804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 15:09:43.243934    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 15:09:43.243943    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-683000 minikube.k8s.io/updated_at=2024_07_31T15_09_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=running-upgrade-683000 minikube.k8s.io/primary=true
	I0731 15:09:43.284117    4804 kubeadm.go:1113] duration metric: took 40.223ms to wait for elevateKubeSystemPrivileges
	I0731 15:09:43.292582    4804 ops.go:34] apiserver oom_adj: -16
	I0731 15:09:43.292717    4804 kubeadm.go:394] duration metric: took 4m11.469595833s to StartCluster
	I0731 15:09:43.292730    4804 settings.go:142] acquiring lock: {Name:mk4ba9457258541473c3bcf6c2e4b75027bd146e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:43.292816    4804 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:09:43.293214    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:43.293394    4804 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:09:43.293401    4804 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 15:09:43.293440    4804 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-683000"
	I0731 15:09:43.293488    4804 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-683000"
	W0731 15:09:43.293492    4804 addons.go:243] addon storage-provisioner should already be in state true
	I0731 15:09:43.293487    4804 config.go:182] Loaded profile config "running-upgrade-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:09:43.293444    4804 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-683000"
	I0731 15:09:43.293512    4804 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-683000"
	I0731 15:09:43.293503    4804 host.go:66] Checking if "running-upgrade-683000" exists ...
	I0731 15:09:43.294382    4804 kapi.go:59] client config for running-upgrade-683000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024fc700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
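The rest.Config dump above shows the test client pointed at the profile's client certificate, key, and CA. As a reference, here is a minimal client-go sketch for building an equivalent client from the kubeconfig path that appears later in this log; this is a simplified illustration, not minikube's actual kapi.go:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this log; the resulting rest.Config carries
	// the profile's client.crt/client.key and ca.crt, as in the dump above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19312-1411/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host, "clientset ready:", clientset != nil)
}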
	I0731 15:09:43.294499    4804 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-683000"
	W0731 15:09:43.294503    4804 addons.go:243] addon default-storageclass should already be in state true
	I0731 15:09:43.294509    4804 host.go:66] Checking if "running-upgrade-683000" exists ...
	I0731 15:09:43.297942    4804 out.go:177] * Verifying Kubernetes components...
	I0731 15:09:43.298287    4804 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 15:09:43.302126    4804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 15:09:43.302134    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:09:43.304896    4804 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:43.307852    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:43.311915    4804 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:09:43.311921    4804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 15:09:43.311928    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:09:43.402886    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:09:43.408774    4804 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:09:43.408827    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:43.412901    4804 api_server.go:72] duration metric: took 119.49775ms to wait for apiserver process to appear ...
	I0731 15:09:43.412911    4804 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:09:43.412920    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:43.455760    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 15:09:43.467709    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:09:48.414968    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:48.415010    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:53.415314    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:53.415372    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:58.415659    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:58.415692    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:03.416061    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:03.416105    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:08.416651    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:08.416703    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:13.417434    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:13.417456    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 15:10:13.795396    4804 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 15:10:13.800253    4804 out.go:177] * Enabled addons: storage-provisioner
	I0731 15:10:13.807126    4804 addons.go:510] duration metric: took 30.514217042s for enable addons: enabled=[storage-provisioner]
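Every healthz probe in this log has the same shape: a GET to https://10.0.2.15:8443/healthz with roughly a five-second client timeout that expires before response headers arrive, producing the repeated "Client.Timeout exceeded while awaiting headers" errors. A simplified, hypothetical Go reconstruction of that probe (not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A 5s per-request timeout reproduces the "context deadline exceeded
	// (Client.Timeout exceeded while awaiting headers)" errors seen above
	// when the apiserver never answers. InsecureSkipVerify is used here
	// only to keep the sketch self-contained.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 0; attempt < 3; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}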
	I0731 15:10:18.417948    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:18.417975    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:23.419067    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:23.419114    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:28.420525    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:28.420576    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:33.421324    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:33.421376    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:38.423381    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:38.423425    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:43.425637    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:43.425725    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:43.437260    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:10:43.437352    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:43.448013    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:10:43.448077    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:43.462503    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:10:43.462578    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:43.477758    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:10:43.477824    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:43.488946    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:10:43.489018    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:43.499064    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:10:43.499125    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:43.509929    4804 logs.go:276] 0 containers: []
	W0731 15:10:43.509940    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:43.509999    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:43.520505    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:10:43.520520    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:10:43.520526    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:10:43.532202    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:10:43.532212    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:10:43.544085    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:10:43.544099    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:10:43.554868    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:43.554883    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:43.579836    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:43.579843    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:43.614683    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:43.614690    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:43.618844    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:10:43.618849    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:10:43.632349    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:10:43.632365    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:10:43.644072    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:10:43.644082    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:43.655593    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:43.655607    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:43.691235    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:10:43.691250    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:10:43.709560    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:10:43.709570    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:10:43.725909    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:10:43.725925    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:10:46.248297    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:51.250861    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:51.251040    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:51.266868    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:10:51.266955    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:51.287523    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:10:51.287592    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:51.298072    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:10:51.298139    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:51.308569    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:10:51.308645    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:51.319983    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:10:51.320053    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:51.331266    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:10:51.331331    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:51.341309    4804 logs.go:276] 0 containers: []
	W0731 15:10:51.341319    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:51.341378    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:51.352101    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:10:51.352121    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:10:51.352126    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:10:51.366800    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:10:51.366810    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:10:51.381008    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:10:51.381019    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:10:51.393205    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:51.393216    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:51.417288    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:51.417306    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:51.451921    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:51.451930    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:51.456140    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:10:51.456147    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:10:51.470024    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:10:51.470033    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:10:51.482131    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:10:51.482142    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:51.495606    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:51.495616    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:51.537226    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:10:51.537237    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:10:51.551408    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:10:51.551419    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:10:51.563671    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:10:51.563682    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:10:54.083723    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:59.086051    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:59.086187    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:59.096954    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:10:59.097032    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:59.107252    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:10:59.107326    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:59.117516    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:10:59.117579    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:59.128291    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:10:59.128363    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:59.138768    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:10:59.138842    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:59.149097    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:10:59.149172    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:59.158980    4804 logs.go:276] 0 containers: []
	W0731 15:10:59.158990    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:59.159051    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:59.170626    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:10:59.170642    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:10:59.170647    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:10:59.184842    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:10:59.184850    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:10:59.197290    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:10:59.197300    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:10:59.212445    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:10:59.212455    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:10:59.233162    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:59.233174    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:59.258318    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:10:59.258326    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:59.269558    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:59.269569    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:59.307101    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:59.307110    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:59.311510    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:10:59.311520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:10:59.323416    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:10:59.323427    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:10:59.335112    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:10:59.335125    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:10:59.352766    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:59.352777    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:59.388623    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:10:59.388634    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:01.902775    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:06.905043    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:06.905182    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:06.918677    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:06.918746    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:06.929634    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:06.929704    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:06.939717    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:06.939787    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:06.952326    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:06.952394    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:06.962759    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:06.962821    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:06.973359    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:06.973426    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:06.983866    4804 logs.go:276] 0 containers: []
	W0731 15:11:06.983878    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:06.983937    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:06.998078    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:06.998093    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:06.998098    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:07.033630    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:07.033638    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:07.048132    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:07.048142    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:07.062078    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:07.062091    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:07.084293    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:07.084303    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:07.098819    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:07.098832    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:07.116553    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:07.116561    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:07.121538    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:07.121547    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:07.156040    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:07.156051    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:07.167115    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:07.167124    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:07.178539    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:07.178549    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:07.189947    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:07.189958    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:07.214706    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:07.214714    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:09.732339    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:14.734102    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:14.734279    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:14.750256    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:14.750338    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:14.762475    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:14.762556    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:14.773369    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:14.773434    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:14.783987    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:14.784057    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:14.794254    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:14.794324    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:14.804676    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:14.804743    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:14.815451    4804 logs.go:276] 0 containers: []
	W0731 15:11:14.815462    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:14.815518    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:14.826173    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:14.826189    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:14.826194    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:14.830701    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:14.830708    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:14.844680    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:14.844690    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:14.865531    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:14.865544    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:14.877581    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:14.877593    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:14.892638    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:14.892649    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:14.904277    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:14.904288    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:14.917249    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:14.917263    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:14.942794    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:14.942805    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:14.980312    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:14.980320    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:15.016640    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:15.016651    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:15.031272    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:15.031283    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:15.048658    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:15.048668    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:17.562808    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:22.565047    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:22.565154    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:22.577494    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:22.577567    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:22.588055    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:22.588124    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:22.598976    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:22.599044    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:22.610354    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:22.610424    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:22.621289    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:22.621352    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:22.631534    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:22.631608    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:22.641726    4804 logs.go:276] 0 containers: []
	W0731 15:11:22.641737    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:22.641796    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:22.653474    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:22.653490    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:22.653495    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:22.665609    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:22.665624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:22.682762    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:22.682770    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:22.694230    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:22.694241    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:22.717455    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:22.717462    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:22.755853    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:22.755864    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:22.760610    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:22.760616    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:22.795130    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:22.795143    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:22.808321    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:22.808337    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:22.825942    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:22.825953    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:22.838382    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:22.838397    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:22.853597    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:22.853608    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:22.868358    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:22.868370    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
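[Editor's note: not part of the captured log.] Each failed probe is followed by the enumeration pass seen above: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} query per component, where the k8s_ prefix is the container naming convention used by dockershim/cri-dockerd. A sketch of that lookup under those assumptions; the containerIDs helper is illustrative, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name carries the k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "lookup failed:", err)
                continue
            }
            // mirrors the "N containers: [...]" lines from logs.go:276; an
            // empty match, as with kindnet above, reports 0 containers
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }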
	I0731 15:11:25.382173    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:30.384400    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:30.384558    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:30.396034    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:30.396137    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:30.406596    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:30.406667    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:30.417306    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:30.417387    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:30.427988    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:30.428054    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:30.438380    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:30.438452    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:30.448881    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:30.448957    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:30.458753    4804 logs.go:276] 0 containers: []
	W0731 15:11:30.458766    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:30.458828    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:30.469858    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:30.469871    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:30.469877    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:30.474283    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:30.474289    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:30.488332    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:30.488346    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:30.502316    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:30.502325    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:30.519618    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:30.519627    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:30.531063    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:30.531073    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:30.566866    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:30.566875    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:30.607579    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:30.607593    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:30.621592    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:30.621606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:30.633653    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:30.633663    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:30.645850    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:30.645861    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:30.657975    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:30.657987    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:30.683349    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:30.683364    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
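[Editor's note: not part of the captured log.] Once the containers are enumerated, each log source is tailed with a bounded command: docker logs --tail 400 <id> for containers, journalctl -n 400 for the kubelet and Docker units, and a filtered dmesg for the kernel. The cap keeps each gathering pass cheap even though the retry loop runs for minutes. A sketch of that pattern, executed locally rather than over SSH as minikube does; the gather helper is illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one bounded log-collection command through bash and
    // prints whatever it captured, mirroring the "Gathering logs for
    // <source> ..." steps from logs.go:123 above.
    func gather(name, command string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("%s: %v\n", name, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("etcd", "docker logs --tail 400 21fc5079a8db") // etcd container ID from the log above
    }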
	I0731 15:11:33.210710    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:38.212880    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:38.212967    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:38.225415    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:38.225492    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:38.236646    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:38.236717    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:38.248088    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:38.248156    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:38.263445    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:38.263516    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:38.274913    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:38.274991    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:38.294874    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:38.294944    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:38.306336    4804 logs.go:276] 0 containers: []
	W0731 15:11:38.306349    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:38.306405    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:38.318465    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:38.318481    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:38.318486    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:38.330834    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:38.330844    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:38.345239    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:38.345250    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:38.356788    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:38.356799    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:38.373646    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:38.373657    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:38.409132    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:38.409142    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:38.444079    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:38.444091    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:38.458577    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:38.458589    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:38.470040    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:38.470050    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:38.481522    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:38.481532    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:38.492793    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:38.492804    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:38.497649    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:38.497655    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:38.511948    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:38.511958    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:41.035603    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:46.037828    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:46.037913    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:46.049161    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:46.049236    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:46.060191    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:46.060260    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:46.071516    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:46.071596    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:46.083088    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:46.083163    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:46.094432    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:46.094508    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:46.105378    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:46.105446    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:46.116010    4804 logs.go:276] 0 containers: []
	W0731 15:11:46.116023    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:46.116087    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:46.127751    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:46.127768    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:46.127774    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:46.133267    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:46.133277    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:46.146048    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:46.146060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:46.160632    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:46.160642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:46.175432    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:46.175441    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:46.188555    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:46.188569    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:46.225717    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:11:46.225730    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:11:46.237091    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:46.237102    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:46.249009    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:46.249019    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:46.274951    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:46.274961    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:46.292011    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:46.292025    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:46.317409    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:46.317419    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:46.330215    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:46.330226    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:46.365613    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:46.365624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:46.384618    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:11:46.384628    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:11:48.897779    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:53.899951    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:53.900048    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:53.911522    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:53.911596    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:53.922872    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:53.922950    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:53.939678    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:53.939753    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:53.951055    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:53.951133    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:53.963131    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:53.963209    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:53.975270    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:53.975340    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:53.985908    4804 logs.go:276] 0 containers: []
	W0731 15:11:53.985919    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:53.985979    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:53.997006    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:53.997026    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:53.997032    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:54.034441    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:54.034456    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:54.056363    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:54.056373    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:54.069234    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:54.069244    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:54.095687    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:54.095704    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:54.110584    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:11:54.110594    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:11:54.122309    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:11:54.122323    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:11:54.137131    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:54.137142    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:54.149331    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:54.149343    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:54.164205    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:54.164219    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:54.175972    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:54.175982    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:54.193522    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:54.193538    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:54.208781    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:54.208791    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:54.247511    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:54.247520    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:54.251673    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:54.251680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:56.769824    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:01.770961    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:01.771046    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:01.782518    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:01.782591    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:01.793694    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:01.793769    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:01.805650    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:01.805725    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:01.816737    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:01.816802    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:01.828595    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:01.828668    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:01.840534    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:01.840603    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:01.853051    4804 logs.go:276] 0 containers: []
	W0731 15:12:01.853063    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:01.853124    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:01.873095    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:01.873112    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:01.873118    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:01.889260    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:01.889272    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:01.901727    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:01.901739    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:01.917275    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:01.917287    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:01.933338    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:01.933346    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:01.946394    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:01.946404    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:01.950937    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:01.950952    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:01.964531    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:01.964540    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:02.001230    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:02.001244    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:02.015507    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:02.015518    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:02.028520    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:02.028534    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:02.043204    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:02.043215    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:02.065225    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:02.065235    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:02.078125    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:02.078137    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:02.103051    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:02.103070    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:04.641305    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:09.643678    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:09.643907    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:09.659346    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:09.659434    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:09.672233    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:09.672314    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:09.684040    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:09.684124    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:09.695892    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:09.695971    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:09.707497    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:09.707575    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:09.719151    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:09.719225    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:09.729989    4804 logs.go:276] 0 containers: []
	W0731 15:12:09.730001    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:09.730070    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:09.741445    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:09.741480    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:09.741489    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:09.767134    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:09.767154    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:09.782486    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:09.782503    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:09.800651    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:09.800663    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:09.813476    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:09.813487    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:09.826183    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:09.826194    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:09.838317    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:09.838330    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:09.853733    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:09.853746    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:09.867336    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:09.867350    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:09.880764    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:09.880776    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:09.894579    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:09.894590    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:09.913019    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:09.913037    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:09.964124    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:09.964139    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:09.969050    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:09.969062    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:09.985629    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:09.985659    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:12.527880    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:17.528983    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:17.529213    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:17.547451    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:17.547546    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:17.561278    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:17.561349    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:17.573826    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:17.573905    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:17.584244    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:17.584310    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:17.594370    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:17.594435    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:17.606020    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:17.606095    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:17.616467    4804 logs.go:276] 0 containers: []
	W0731 15:12:17.616479    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:17.616538    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:17.627839    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:17.627855    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:17.627861    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:17.633171    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:17.633180    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:17.647186    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:17.647198    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:17.660313    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:17.660325    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:17.674029    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:17.674043    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:17.686872    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:17.686883    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:17.705837    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:17.705850    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:17.720690    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:17.720702    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:17.757504    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:17.757523    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:17.772504    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:17.772520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:17.785447    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:17.785459    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:17.822389    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:17.822399    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:17.838321    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:17.838330    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:17.854374    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:17.854385    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:17.880288    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:17.880308    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:20.394899    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:25.395336    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:25.395586    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:25.416062    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:25.416156    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:25.430815    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:25.430882    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:25.442596    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:25.442672    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:25.453049    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:25.453115    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:25.470517    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:25.470587    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:25.480862    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:25.480928    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:25.491206    4804 logs.go:276] 0 containers: []
	W0731 15:12:25.491217    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:25.491270    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:25.502062    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:25.502078    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:25.502084    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:25.513962    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:25.513973    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:25.538595    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:25.538614    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:25.576636    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:25.576647    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:25.591792    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:25.591807    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:25.607277    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:25.607293    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:25.619795    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:25.619810    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:25.633030    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:25.633041    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:25.670946    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:25.670961    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:25.675842    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:25.675855    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:25.691133    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:25.691144    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:25.709785    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:25.709799    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:25.722593    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:25.722611    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:25.735210    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:25.735223    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:25.747418    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:25.747429    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:28.260506    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:33.262749    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:33.262939    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:33.279782    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:33.279861    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:33.293575    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:33.293649    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:33.306729    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:33.306801    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:33.317811    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:33.317877    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:33.328567    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:33.328641    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:33.348719    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:33.348792    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:33.359281    4804 logs.go:276] 0 containers: []
	W0731 15:12:33.359292    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:33.359353    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:33.369625    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:33.369643    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:33.369649    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:33.381576    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:33.381587    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:33.396232    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:33.396242    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:33.414323    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:33.414336    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:33.450542    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:33.450555    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:33.455403    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:33.455417    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:33.467707    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:33.467718    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:33.481333    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:33.481344    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:33.496426    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:33.496438    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:33.509175    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:33.509183    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:33.534509    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:33.534519    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:33.546825    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:33.546840    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:33.583283    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:33.583294    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:33.602737    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:33.602749    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:33.616449    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:33.616465    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:36.131473    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:41.133625    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:41.133768    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:41.170622    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:41.170702    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:41.192559    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:41.192635    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:41.206978    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:41.207050    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:41.218975    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:41.219048    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:41.229447    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:41.229524    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:41.243946    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:41.244016    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:41.254702    4804 logs.go:276] 0 containers: []
	W0731 15:12:41.254714    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:41.254769    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:41.267637    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:41.267654    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:41.267660    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:41.278979    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:41.278990    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:41.296764    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:41.296775    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:41.308943    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:41.308956    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:41.345349    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:41.345359    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:41.359947    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:41.359962    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:41.373551    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:41.373563    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:41.386729    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:41.386743    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:41.428436    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:41.428452    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:41.443760    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:41.443772    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:41.456634    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:41.456646    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:41.469703    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:41.469716    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:41.489865    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:41.489881    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:41.502437    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:41.502449    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:41.506907    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:41.506915    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:44.034417    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:49.036779    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:49.037222    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:49.080264    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:49.080393    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:49.099680    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:49.099776    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:49.114150    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:49.114227    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:49.126177    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:49.126250    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:49.141871    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:49.141948    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:49.153511    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:49.153584    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:49.163974    4804 logs.go:276] 0 containers: []
	W0731 15:12:49.163984    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:49.164040    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:49.177582    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:49.177600    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:49.177606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:49.190072    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:49.190083    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:49.208522    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:49.208533    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:49.245228    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:49.245240    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:49.287059    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:49.287070    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:49.302268    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:49.302280    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:49.315068    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:49.315080    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:49.327119    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:49.327132    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:49.350860    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:49.350876    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:49.377854    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:49.377871    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:49.390911    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:49.390923    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:49.395792    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:49.395803    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:49.416012    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:49.416024    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:49.428610    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:49.428621    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:49.441774    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:49.441788    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:51.956317    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:56.958659    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:56.958949    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:56.977179    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:56.977266    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:56.990464    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:56.990528    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:57.001308    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:57.001381    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:57.017306    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:57.017375    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:57.031922    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:57.031990    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:57.041904    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:57.041971    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:57.051953    4804 logs.go:276] 0 containers: []
	W0731 15:12:57.051964    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:57.052014    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:57.062523    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:57.062540    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:57.062545    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:57.081723    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:57.081736    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:57.093632    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:57.093646    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:57.131504    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:57.131514    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:57.135859    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:57.135868    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:57.147347    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:57.147361    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:57.165613    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:57.165624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:57.178164    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:57.178177    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:57.190860    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:57.190875    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:57.238246    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:57.238263    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:57.254788    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:57.254806    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:57.276564    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:57.276577    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:57.292087    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:57.292097    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:57.305277    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:57.305288    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:57.330579    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:57.330588    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:59.844794    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:04.847149    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:04.847399    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:04.872098    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:04.872197    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:04.888115    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:04.888200    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:04.901322    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:04.901403    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:04.914082    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:04.914154    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:04.924723    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:04.924795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:04.935589    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:04.935661    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:04.945772    4804 logs.go:276] 0 containers: []
	W0731 15:13:04.945783    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:04.945842    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:04.956563    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:04.956579    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:04.956584    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:04.972682    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:04.972696    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:04.983874    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:04.983885    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:05.007586    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:05.007594    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:05.021851    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:05.021862    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:05.033059    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:05.033068    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:05.050801    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:05.050813    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:05.062581    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:05.062592    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:05.075330    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:05.075342    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:05.090333    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:05.090343    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:05.103800    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:05.103818    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:05.116742    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:05.116752    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:05.137591    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:05.137609    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:05.177041    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:05.177061    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:05.182124    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:05.182133    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:07.726182    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:12.728614    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:12.728876    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:12.754113    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:12.754240    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:12.770542    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:12.770618    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:12.784274    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:12.784346    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:12.795547    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:12.795616    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:12.806451    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:12.806519    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:12.822654    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:12.822719    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:12.832829    4804 logs.go:276] 0 containers: []
	W0731 15:13:12.832842    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:12.832906    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:12.843186    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:12.843204    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:12.843210    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:12.855015    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:12.855025    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:12.869901    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:12.869911    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:12.881706    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:12.881720    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:12.899587    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:12.899597    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:12.924404    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:12.924411    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:12.937903    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:12.937913    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:12.952128    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:12.952141    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:12.963764    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:12.963775    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:12.968982    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:12.968988    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:12.981047    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:12.981059    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:12.998840    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:12.998850    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:13.036885    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:13.036894    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:13.074146    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:13.074162    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:13.086690    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:13.086702    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:15.601448    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:20.603572    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:20.603698    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:20.615724    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:20.615807    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:20.628063    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:20.628143    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:20.642644    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:20.642726    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:20.654102    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:20.654168    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:20.665587    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:20.665657    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:20.675683    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:20.675746    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:20.686976    4804 logs.go:276] 0 containers: []
	W0731 15:13:20.686987    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:20.687052    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:20.698339    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:20.698354    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:20.698359    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:20.717291    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:20.717302    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:20.728670    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:20.728682    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:20.741088    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:20.741097    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:20.753680    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:20.753692    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:20.793711    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:20.793728    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:20.806817    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:20.806826    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:20.819010    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:20.819021    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:20.831527    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:20.831539    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:20.843779    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:20.843790    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:20.848602    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:20.848613    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:20.885110    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:20.885123    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:20.900272    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:20.900282    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:20.919003    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:20.919017    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:20.934871    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:20.934887    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:23.462001    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:28.464089    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:28.464200    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:28.475371    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:28.475453    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:28.486393    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:28.486465    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:28.497691    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:28.497759    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:28.508869    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:28.508939    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:28.519861    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:28.519924    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:28.531092    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:28.531166    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:28.542523    4804 logs.go:276] 0 containers: []
	W0731 15:13:28.542533    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:28.542597    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:28.555313    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:28.555330    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:28.555335    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:28.569662    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:28.569674    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:28.580846    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:28.580856    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:28.598679    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:28.598690    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:28.610625    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:28.610635    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:28.627085    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:28.627095    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:28.639416    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:28.639427    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:28.644127    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:28.644136    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:28.684366    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:28.684381    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:28.699786    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:28.699800    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:28.711616    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:28.711630    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:28.723310    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:28.723324    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:28.759698    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:28.759706    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:28.771362    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:28.771376    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:28.782550    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:28.782561    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:31.308891    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:36.309604    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:36.309819    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:36.326319    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:36.326407    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:36.338640    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:36.338718    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:36.350582    4804 logs.go:276] 4 containers: [e1c601a4adb4 57c66d79a419 eacaa92db7e0 75305c810552]
	I0731 15:13:36.350659    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:36.360698    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:36.360770    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:36.371109    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:36.371174    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:36.382042    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:36.382114    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:36.391996    4804 logs.go:276] 0 containers: []
	W0731 15:13:36.392006    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:36.392087    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:36.402944    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:36.402962    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:36.402967    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:36.414468    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:36.414478    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:36.452158    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:36.452173    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:36.456971    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:36.456979    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:36.468368    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:36.468379    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:36.482523    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:36.482533    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:36.497473    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:36.497490    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:36.516252    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:36.516263    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:36.528496    4804 logs.go:123] Gathering logs for coredns [e1c601a4adb4] ...
	I0731 15:13:36.528507    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c601a4adb4"
	I0731 15:13:36.539938    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:36.539955    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:36.555728    4804 logs.go:123] Gathering logs for coredns [57c66d79a419] ...
	I0731 15:13:36.555738    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57c66d79a419"
	I0731 15:13:36.569914    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:36.569926    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:36.581163    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:36.581174    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:36.604480    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:36.604488    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:36.640257    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:36.640268    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:39.156293    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:44.156696    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:44.160161    4804 out.go:177] 
	W0731 15:13:44.164156    4804 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 15:13:44.164162    4804 out.go:239] * 
	W0731 15:13:44.164663    4804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:13:44.179083    4804 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-683000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-31 15:13:44.268811 -0700 PDT m=+2855.716369292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-683000 -n running-upgrade-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-683000 -n running-upgrade-683000: exit status 2 (15.672710375s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-683000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-762000          | force-systemd-flag-762000 | jenkins | v1.33.1 | 31 Jul 24 15:03 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-397000              | force-systemd-env-397000  | jenkins | v1.33.1 | 31 Jul 24 15:03 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-397000           | force-systemd-env-397000  | jenkins | v1.33.1 | 31 Jul 24 15:03 PDT | 31 Jul 24 15:03 PDT |
	| start   | -p docker-flags-700000                | docker-flags-700000       | jenkins | v1.33.1 | 31 Jul 24 15:03 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-762000             | force-systemd-flag-762000 | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-762000          | force-systemd-flag-762000 | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT | 31 Jul 24 15:04 PDT |
	| start   | -p cert-expiration-885000             | cert-expiration-885000    | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-700000 ssh               | docker-flags-700000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-700000 ssh               | docker-flags-700000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-700000                | docker-flags-700000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT | 31 Jul 24 15:04 PDT |
	| start   | -p cert-options-991000                | cert-options-991000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-991000 ssh               | cert-options-991000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-991000 -- sudo        | cert-options-991000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-991000                | cert-options-991000       | jenkins | v1.33.1 | 31 Jul 24 15:04 PDT | 31 Jul 24 15:04 PDT |
	| start   | -p running-upgrade-683000             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 15:04 PDT | 31 Jul 24 15:05 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-683000             | running-upgrade-683000    | jenkins | v1.33.1 | 31 Jul 24 15:05 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-885000             | cert-expiration-885000    | jenkins | v1.33.1 | 31 Jul 24 15:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-885000             | cert-expiration-885000    | jenkins | v1.33.1 | 31 Jul 24 15:07 PDT | 31 Jul 24 15:07 PDT |
	| start   | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.33.1 | 31 Jul 24 15:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.33.1 | 31 Jul 24 15:07 PDT | 31 Jul 24 15:07 PDT |
	| start   | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.33.1 | 31 Jul 24 15:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.33.1 | 31 Jul 24 15:07 PDT | 31 Jul 24 15:07 PDT |
	| start   | -p stopped-upgrade-609000             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 15:07 PDT | 31 Jul 24 15:08 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-609000 stop           | minikube                  | jenkins | v1.26.0 | 31 Jul 24 15:08 PDT | 31 Jul 24 15:08 PDT |
	| start   | -p stopped-upgrade-609000             | stopped-upgrade-609000    | jenkins | v1.33.1 | 31 Jul 24 15:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 15:08:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 15:08:39.650157    4988 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:08:39.650340    4988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:08:39.650344    4988 out.go:304] Setting ErrFile to fd 2...
	I0731 15:08:39.650347    4988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:08:39.650507    4988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:08:39.651620    4988 out.go:298] Setting JSON to false
	I0731 15:08:39.670714    4988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4083,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:08:39.670793    4988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:08:39.675607    4988 out.go:177] * [stopped-upgrade-609000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:08:39.681569    4988 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:08:39.681627    4988 notify.go:220] Checking for updates...
	I0731 15:08:39.687466    4988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:08:39.690519    4988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:08:39.691770    4988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:08:39.694567    4988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:08:39.697556    4988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:08:39.700787    4988 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:08:39.703419    4988 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 15:08:39.706519    4988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:08:39.710504    4988 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:08:39.717551    4988 start.go:297] selected driver: qemu2
	I0731 15:08:39.717559    4988 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:08:39.717641    4988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:08:39.720408    4988 cni.go:84] Creating CNI manager for ""
	I0731 15:08:39.720422    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:08:39.720442    4988 start.go:340] cluster config:
	{Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:08:39.720494    4988 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:08:39.728476    4988 out.go:177] * Starting "stopped-upgrade-609000" primary control-plane node in "stopped-upgrade-609000" cluster
	I0731 15:08:39.732528    4988 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 15:08:39.732544    4988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 15:08:39.732555    4988 cache.go:56] Caching tarball of preloaded images
	I0731 15:08:39.732606    4988 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:08:39.732613    4988 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 15:08:39.732668    4988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/config.json ...
	I0731 15:08:39.733060    4988 start.go:360] acquireMachinesLock for stopped-upgrade-609000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:08:39.733092    4988 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "stopped-upgrade-609000"
	I0731 15:08:39.733101    4988 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:08:39.733105    4988 fix.go:54] fixHost starting: 
	I0731 15:08:39.733204    4988 fix.go:112] recreateIfNeeded on stopped-upgrade-609000: state=Stopped err=<nil>
	W0731 15:08:39.733214    4988 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:08:39.741531    4988 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-609000" ...
	I0731 15:08:40.399670    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:40.400075    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:40.440698    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:40.440833    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:40.465035    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:40.465150    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:40.480059    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:40.480136    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:40.494558    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:40.494623    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:40.513468    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:40.513544    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:40.525953    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:40.526020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:40.536393    4804 logs.go:276] 0 containers: []
	W0731 15:08:40.536402    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:40.536453    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:40.546834    4804 logs.go:276] 0 containers: []
	W0731 15:08:40.546847    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:40.546856    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:40.546863    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:40.558192    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:40.558207    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:40.570142    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:40.570155    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:40.581224    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:40.581234    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:40.596336    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:40.596348    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:40.610418    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:40.610431    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:40.630761    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:40.630772    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:40.642113    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:40.642123    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:40.657918    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:40.657928    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:40.672855    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:40.672865    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:40.711131    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:40.711138    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:40.745242    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:40.745255    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:40.764367    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:40.764378    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:40.788797    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:40.788805    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:40.793039    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:40.793046    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
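	Two processes interleave through this stretch of the log: PID 4988 is restarting the "stopped-upgrade-609000" VM, while PID 4804 belongs to a parallel test stuck in a diagnostic loop because its apiserver never answers /healthz, so timestamps jump back and forth. Each pass of that loop first resolves container IDs with a docker ps name filter and then tails the last 400 lines of each match. A minimal standalone sketch of the same ps-then-logs cycle, using hypothetical helper names rather than minikube's actual logs.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: docker ps -a --filter=name=<f> --format={{.ID}}
	func containerIDs(filter string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+filter, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
			ids, _ := containerIDs(name)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// Tail each container's recent log, as the harness does with "docker logs --tail 400 <id>".
				out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", name, id, out)
			}
		}
	}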
	I0731 15:08:43.315430    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:39.745445    4988 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:08:39.745503    4988 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376,hostname=stopped-upgrade-609000 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/disk.qcow2
	I0731 15:08:39.790435    4988 main.go:141] libmachine: STDOUT: 
	I0731 15:08:39.790464    4988 main.go:141] libmachine: STDERR: 
	I0731 15:08:39.790472    4988 main.go:141] libmachine: Waiting for VM to start (ssh -p 50463 docker@127.0.0.1)...
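	The restart reuses the machine's existing qcow2 disk: qemu-system-aarch64 boots the boot2docker ISO with HVF acceleration (-accel hvf, -cpu host), the profile's 2200 MB of RAM and 2 vCPUs, and user-mode networking whose hostfwd rules map host port 50463 to guest port 22 (SSH) and 50464 to 2376 (Docker TLS). The "Waiting for VM to start" step is then a dial loop against the forwarded SSH port. A sketch of such a readiness probe, under the assumption that a plain TCP accept is a good enough liveness signal (libmachine's real check also completes an SSH handshake):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP polls addr until it accepts a connection or the timeout elapses.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // the guest's sshd is reachable; authentication comes next
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		// Port taken from the hostfwd rule logged above.
		fmt.Println(waitForTCP("127.0.0.1:50463", 5*time.Minute))
	}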
	I0731 15:08:48.317725    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:08:48.317914    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:48.329499    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:48.329578    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:48.340182    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:48.340262    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:48.351067    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:48.351136    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:48.362750    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:48.362819    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:48.373418    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:48.373477    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:48.383976    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:48.384037    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:48.393724    4804 logs.go:276] 0 containers: []
	W0731 15:08:48.393734    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:48.393795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:48.404232    4804 logs.go:276] 0 containers: []
	W0731 15:08:48.404243    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:48.404252    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:48.404257    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:48.421899    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:48.421909    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:48.433392    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:48.433406    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:08:48.445665    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:48.445680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:48.461155    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:48.461167    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:48.472899    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:48.472908    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:48.497697    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:48.497705    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:48.512308    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:48.512319    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:48.534569    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:48.534579    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:48.549060    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:48.549070    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:48.562762    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:48.562772    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:48.604941    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:48.604949    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:48.644895    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:48.644906    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:48.659046    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:48.659057    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:48.670686    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:48.670698    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:51.177732    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:08:56.180036    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
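	Each "Checking apiserver healthz" / "stopped" pair above is one HTTPS GET against /healthz that dies on its client timeout; while the guest's apiserver is down, every probe fails identically and the harness returns to gathering logs. A self-contained sketch of such a probe (illustrative only: the timeout is inferred from the roughly five-second gaps in the log, and TLS verification is skipped here where the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // probe gives up with "context deadline exceeded"
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // e.g. Client.Timeout exceeded while awaiting headers
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
	}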
	I0731 15:08:56.180465    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:08:56.219789    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:08:56.219918    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:08:56.241564    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:08:56.241654    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:08:56.257174    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:08:56.257254    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:08:56.272232    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:08:56.272303    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:08:56.288669    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:08:56.288732    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:08:56.299472    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:08:56.299531    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:08:56.309865    4804 logs.go:276] 0 containers: []
	W0731 15:08:56.309883    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:08:56.309946    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:08:56.320108    4804 logs.go:276] 0 containers: []
	W0731 15:08:56.320119    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:08:56.320126    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:08:56.320131    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:08:56.332748    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:08:56.332759    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:08:56.358339    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:08:56.358358    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:08:56.399036    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:08:56.399047    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:08:56.403577    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:08:56.403585    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:08:56.417353    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:08:56.417365    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:08:56.439691    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:08:56.439702    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:08:56.453915    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:08:56.453928    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:08:56.474147    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:08:56.474156    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:08:56.492451    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:08:56.492463    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:08:56.504787    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:08:56.504800    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:08:56.522110    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:08:56.522122    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:08:56.563809    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:08:56.563824    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:08:56.578454    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:08:56.578468    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:08:56.596073    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:08:56.596085    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:00.178062    4988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/config.json ...
	I0731 15:09:00.178895    4988 machine.go:94] provisionDockerMachine start ...
	I0731 15:09:00.179074    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.179617    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.179633    4988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 15:09:00.272466    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 15:09:00.272494    4988 buildroot.go:166] provisioning hostname "stopped-upgrade-609000"
	I0731 15:09:00.272605    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.272814    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.272826    4988 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-609000 && echo "stopped-upgrade-609000" | sudo tee /etc/hostname
	I0731 15:09:00.354491    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-609000
	
	I0731 15:09:00.354591    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.354758    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.354769    4988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-609000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-609000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-609000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 15:09:00.429297    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 15:09:00.429311    4988 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1411/.minikube}
	I0731 15:09:00.429319    4988 buildroot.go:174] setting up certificates
	I0731 15:09:00.429326    4988 provision.go:84] configureAuth start
	I0731 15:09:00.429330    4988 provision.go:143] copyHostCerts
	I0731 15:09:00.429403    4988 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem, removing ...
	I0731 15:09:00.429413    4988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem
	I0731 15:09:00.429565    4988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem (1078 bytes)
	I0731 15:09:00.429781    4988 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem, removing ...
	I0731 15:09:00.429786    4988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem
	I0731 15:09:00.429846    4988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem (1123 bytes)
	I0731 15:09:00.429973    4988 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem, removing ...
	I0731 15:09:00.429977    4988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem
	I0731 15:09:00.430030    4988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem (1679 bytes)
	I0731 15:09:00.430137    4988 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-609000 san=[127.0.0.1 localhost minikube stopped-upgrade-609000]
	I0731 15:09:00.511618    4988 provision.go:177] copyRemoteCerts
	I0731 15:09:00.511656    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 15:09:00.511664    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:09:00.549114    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 15:09:00.556880    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 15:09:00.564586    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 15:09:00.571290    4988 provision.go:87] duration metric: took 141.961916ms to configureAuth
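	configureAuth above refreshes the host-side CA and client certificates, then mints a server certificate whose SANs are the names logged (127.0.0.1, localhost, minikube, stopped-upgrade-609000) and pushes it to /etc/docker on the guest. A compact sketch of the SAN-bearing certificate step with crypto/x509; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-609000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: san=[127.0.0.1 localhost minikube stopped-upgrade-609000]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-609000"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}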
	I0731 15:09:00.571299    4988 buildroot.go:189] setting minikube options for container-runtime
	I0731 15:09:00.571420    4988 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:09:00.571453    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.571545    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.571550    4988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 15:09:00.639415    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 15:09:00.639424    4988 buildroot.go:70] root file system type: tmpfs
	I0731 15:09:00.639477    4988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 15:09:00.639520    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.639634    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.639669    4988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 15:09:00.711376    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 15:09:00.711451    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.711578    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.711589    4988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 15:09:01.052512    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 15:09:01.052524    4988 machine.go:97] duration metric: took 873.632375ms to provisionDockerMachine
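	Two quirks in the docker.service exchange above are worth decoding. The %!s(MISSING) in the logged command is Go's fmt marker for an argument missing from the log call itself; the printf that actually ran received the unit text, as the echoed output confirms (note the restored $MAINPID there). And the update is idempotent by construction: diff -u compares the rendered unit with the installed one, and only when that comparison fails, here with "can't stat ... No such file or directory" because the freshly booted ISO has no unit yet, does the || branch move the .new file into place and daemon-reload, enable, and restart docker.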
	I0731 15:09:01.052532    4988 start.go:293] postStartSetup for "stopped-upgrade-609000" (driver="qemu2")
	I0731 15:09:01.052539    4988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 15:09:01.052589    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 15:09:01.052599    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:09:01.089543    4988 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 15:09:01.090846    4988 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 15:09:01.090854    4988 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1411/.minikube/addons for local assets ...
	I0731 15:09:01.090931    4988 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1411/.minikube/files for local assets ...
	I0731 15:09:01.091031    4988 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem -> 19132.pem in /etc/ssl/certs
	I0731 15:09:01.091130    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 15:09:01.093698    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem --> /etc/ssl/certs/19132.pem (1708 bytes)
	I0731 15:09:01.100995    4988 start.go:296] duration metric: took 48.458375ms for postStartSetup
	I0731 15:09:01.101011    4988 fix.go:56] duration metric: took 21.368247958s for fixHost
	I0731 15:09:01.101044    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:01.101149    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:01.101153    4988 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 15:09:01.167199    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722463741.607963254
	
	I0731 15:09:01.167207    4988 fix.go:216] guest clock: 1722463741.607963254
	I0731 15:09:01.167212    4988 fix.go:229] Guest: 2024-07-31 15:09:01.607963254 -0700 PDT Remote: 2024-07-31 15:09:01.101012 -0700 PDT m=+21.482586667 (delta=506.951254ms)
	I0731 15:09:01.167224    4988 fix.go:200] guest clock delta is within tolerance: 506.951254ms
	I0731 15:09:01.167227    4988 start.go:83] releasing machines lock for "stopped-upgrade-609000", held for 21.434474042s
	I0731 15:09:01.167301    4988 ssh_runner.go:195] Run: cat /version.json
	I0731 15:09:01.167311    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:09:01.167301    4988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 15:09:01.167339    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	W0731 15:09:01.168043    4988 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50463: connect: connection refused
	I0731 15:09:01.168066    4988 retry.go:31] will retry after 260.730151ms: dial tcp [::1]:50463: connect: connection refused
	W0731 15:09:01.479854    4988 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 15:09:01.480035    4988 ssh_runner.go:195] Run: systemctl --version
	I0731 15:09:01.483420    4988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 15:09:01.486168    4988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 15:09:01.486215    4988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 15:09:01.490946    4988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 15:09:01.497950    4988 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 15:09:01.497969    4988 start.go:495] detecting cgroup driver to use...
	I0731 15:09:01.498085    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 15:09:01.507664    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 15:09:01.511332    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 15:09:01.514725    4988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 15:09:01.514756    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 15:09:01.518125    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 15:09:01.521596    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 15:09:01.525026    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 15:09:01.528150    4988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 15:09:01.530842    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 15:09:01.533756    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 15:09:01.537003    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 15:09:01.540110    4988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 15:09:01.542625    4988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 15:09:01.545531    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:01.610786    4988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 15:09:01.620879    4988 start.go:495] detecting cgroup driver to use...
	I0731 15:09:01.620941    4988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 15:09:01.626013    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 15:09:01.635163    4988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 15:09:01.641194    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 15:09:01.645682    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 15:09:01.650224    4988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 15:09:01.715118    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
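	This run of systemctl and sed calls is minikube pinning the node to a single container runtime. containerd's config is rewritten to the cgroupfs driver (SystemdCgroup = false) so that all runtimes agree with the kubelet, then containerd and crio are stopped and re-probed with is-active so only dockerd ends up serving the CRI. The crictl endpoint in /etc/crictl.yaml is switched to match, from containerd.sock before this block to cri-dockerd.sock just after it.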
	I0731 15:09:01.720719    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 15:09:01.726363    4988 ssh_runner.go:195] Run: which cri-dockerd
	I0731 15:09:01.727820    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 15:09:01.730305    4988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 15:09:01.735462    4988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 15:09:01.802564    4988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 15:09:01.865030    4988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 15:09:01.865099    4988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 15:09:01.870462    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:01.937324    4988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 15:09:03.075521    4988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.138194917s)
	I0731 15:09:03.075577    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 15:09:03.080664    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 15:09:03.085053    4988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 15:09:03.148833    4988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 15:09:03.208990    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:03.268681    4988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 15:09:03.274845    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 15:09:03.279283    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:03.341440    4988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 15:09:03.381745    4988 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 15:09:03.381867    4988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 15:09:03.384464    4988 start.go:563] Will wait 60s for crictl version
	I0731 15:09:03.384510    4988 ssh_runner.go:195] Run: which crictl
	I0731 15:09:03.386549    4988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 15:09:03.401266    4988 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 15:09:03.401329    4988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 15:09:03.417584    4988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 15:08:59.110654    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:03.438542    4988 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 15:09:03.438603    4988 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 15:09:03.439899    4988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 15:09:03.443552    4988 kubeadm.go:883] updating cluster {Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 15:09:03.443595    4988 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 15:09:03.443634    4988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 15:09:03.453732    4988 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 15:09:03.453739    4988 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
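	The mismatch reported here is expected in this upgrade scenario: the old VM's v1.24.1-era image store tags everything under k8s.gcr.io, while current minikube checks for the registry.k8s.io names. kube-apiserver therefore counts as not preloaded, and the code falls back to pushing a fresh preload tarball (next) and, when the registry.k8s.io tags are still absent after that, to loading individual images from the host cache.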
	I0731 15:09:03.453783    4988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 15:09:03.457169    4988 ssh_runner.go:195] Run: which lz4
	I0731 15:09:03.458428    4988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 15:09:03.459778    4988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 15:09:03.459788    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 15:09:04.411896    4988 docker.go:649] duration metric: took 953.512666ms to copy over tarball
	I0731 15:09:04.411962    4988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
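	The transfer is gated by an existence check: stat on /preloaded.tar.lz4 exits 1 on the guest, so the ~359 MB cached tarball is copied over and then unpacked into /var with lz4 decompression, restoring the Docker image store before the daemon is restarted. A reduced local sketch of that gate (hypothetical paths; the real sequence runs through the ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const dst = "/preloaded.tar.lz4"
		if _, err := os.Stat(dst); err != nil {
			// Missing on the guest, so the cached tarball would be pushed here.
			fmt.Println("existence check failed, transferring preload:", err)
		}
		// Unpack layers into /var, preserving security xattrs, decompressing with lz4.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", dst)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}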
	I0731 15:09:04.111265    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:04.111353    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:04.123295    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:04.123372    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:04.134723    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:04.134795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:04.154703    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:04.154774    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:04.167063    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:04.167143    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:04.178824    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:04.178894    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:04.197642    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:04.197714    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:04.209291    4804 logs.go:276] 0 containers: []
	W0731 15:09:04.209302    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:04.209359    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:04.220874    4804 logs.go:276] 0 containers: []
	W0731 15:09:04.220884    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:04.220892    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:04.220898    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:04.245580    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:04.245594    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:04.259535    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:04.259547    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:04.303316    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:04.303332    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:04.316455    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:04.316467    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:04.328742    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:04.328758    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:04.350684    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:04.350698    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:04.366493    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:04.366511    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:04.383869    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:04.383882    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:04.397839    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:04.397854    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:04.422451    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:04.422466    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:04.427544    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:04.427558    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:04.465970    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:04.465982    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:04.481491    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:04.481503    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:04.496909    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:04.496922    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:07.014258    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:05.597889    4988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185932875s)
	I0731 15:09:05.597903    4988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 15:09:05.614171    4988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 15:09:05.617631    4988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 15:09:05.622812    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:05.685025    4988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 15:09:07.313893    4988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.628878833s)
	I0731 15:09:07.313981    4988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 15:09:07.327802    4988 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 15:09:07.327812    4988 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 15:09:07.327818    4988 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 15:09:07.333227    4988 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.335296    4988 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.336881    4988 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.336882    4988 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.338453    4988 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.338471    4988 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.339943    4988 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.340133    4988 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.341323    4988 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.341371    4988 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.342330    4988 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0731 15:09:07.342516    4988 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.343749    4988 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.343854    4988 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.344783    4988 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 15:09:07.345349    4988 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.771686    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.773786    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.788545    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.790445    4988 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 15:09:07.790467    4988 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.790504    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.793363    4988 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 15:09:07.793384    4988 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.793431    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.802205    4988 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 15:09:07.802230    4988 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.802291    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.806412    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 15:09:07.810408    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 15:09:07.815196    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 15:09:07.816481    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 15:09:07.824748    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.827412    4988 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 15:09:07.827430    4988 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 15:09:07.827465    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 15:09:07.839161    4988 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 15:09:07.839174    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 15:09:07.839181    4988 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.839232    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.839280    4988 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0731 15:09:07.841166    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.851785    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 15:09:07.851892    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 15:09:07.851911    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 15:09:07.852186    4988 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 15:09:07.852199    4988 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.852237    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.860240    4988 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 15:09:07.860253    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 15:09:07.865841    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 15:09:07.865955    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0731 15:09:07.866200    4988 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 15:09:07.866300    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.892774    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 15:09:07.892815    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 15:09:07.892840    4988 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 15:09:07.892840    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 15:09:07.892857    4988 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.892903    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.906674    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 15:09:07.906790    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 15:09:07.908277    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 15:09:07.908290    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0731 15:09:07.931448    4988 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 15:09:07.931560    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.963756    4988 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 15:09:07.963785    4988 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.963850    4988 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.996054    4988 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 15:09:07.996076    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 15:09:08.004657    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 15:09:08.004782    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 15:09:08.113773    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 15:09:08.113780    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 15:09:08.113804    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 15:09:08.184126    4988 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 15:09:08.184140    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 15:09:08.513344    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 15:09:08.513365    4988 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 15:09:08.513373    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 15:09:08.663105    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 15:09:08.663151    4988 cache_images.go:92] duration metric: took 1.335348792s to LoadCachedImages
	W0731 15:09:08.663202    4988 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
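
The image-cache phase above follows one pattern per image: inspect the image ID in the container runtime, "docker rmi" any stale copy, probe /var/lib/minikube/images with stat -c "%s %y", scp the tarball from the host cache, and pipe it through "sudo cat ... | docker load". A minimal local sketch of that check-then-load step in Go (plain file I/O here stands in for the scp-over-SSH that ssh_runner.go performs, and loadCachedImage is an illustrative helper, not minikube's API):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage mirrors the log's per-image flow: skip the transfer when
    // the tarball already exists on the node, otherwise copy it into place and
    // pipe it to `docker load`.
    func loadCachedImage(cachePath, nodePath string) error {
        if _, err := os.Stat(nodePath); err == nil {
            return nil // existence check passed: already transferred
        }
        data, err := os.ReadFile(cachePath)
        if err != nil {
            return fmt.Errorf("read cache: %w", err)
        }
        // Local copy standing in for the scp step in the log.
        if err := os.WriteFile(nodePath, data, 0o644); err != nil {
            return fmt.Errorf("transfer: %w", err)
        }
        cmd := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", nodePath))
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        err := loadCachedImage(
            "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
            "/var/lib/minikube/images/pause_3.7")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
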
	I0731 15:09:08.663207    4988 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 15:09:08.663264    4988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-609000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 15:09:08.663335    4988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 15:09:08.676668    4988 cni.go:84] Creating CNI manager for ""
	I0731 15:09:08.676681    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:09:08.676687    4988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 15:09:08.676694    4988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-609000 NodeName:stopped-upgrade-609000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 15:09:08.676760    4988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-609000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
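
The generated kubeadm config above pins criSocket to unix:///var/run/cri-dockerd.sock and cgroupDriver to cgroupfs; the very next step asks Docker for its cgroup driver (docker info --format {{.CgroupDriver}}), since a kubelet/runtime mismatch would break kubelet startup. A small Go sketch of that consistency check (the expected value is hard-coded from the kubeadm.yaml above; this is an illustration, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // Repeat the check the log performs right after writing kubeadm.yaml:
    // ask Docker which cgroup driver it uses and compare it with the value
    // pinned in the generated KubeletConfiguration.
    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "docker info failed:", err)
            os.Exit(1)
        }
        got := strings.TrimSpace(string(out))
        const want = "cgroupfs" // cgroupDriver value from the kubeadm.yaml above
        if got != want {
            fmt.Printf("mismatch: docker reports %q, kubelet config expects %q\n", got, want)
            os.Exit(1)
        }
        fmt.Println("cgroup driver matches:", got)
    }
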
	I0731 15:09:08.676817    4988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 15:09:08.680016    4988 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 15:09:08.680050    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 15:09:08.682476    4988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 15:09:08.687204    4988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 15:09:08.691738    4988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 15:09:08.696737    4988 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 15:09:08.697966    4988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 15:09:08.701735    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:08.766689    4988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:09:08.776414    4988 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000 for IP: 10.0.2.15
	I0731 15:09:08.776425    4988 certs.go:194] generating shared ca certs ...
	I0731 15:09:08.776435    4988 certs.go:226] acquiring lock for ca certs: {Name:mk0bfd7451d2ce366c95ee7ce2af2fa5265e7335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.776608    4988 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.key
	I0731 15:09:08.776647    4988 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.key
	I0731 15:09:08.776653    4988 certs.go:256] generating profile certs ...
	I0731 15:09:08.776711    4988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.key
	I0731 15:09:08.776732    4988 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf
	I0731 15:09:08.776743    4988 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 15:09:08.835581    4988 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf ...
	I0731 15:09:08.835597    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf: {Name:mkfdb7af116406fb5ca43546504716c0cea15846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.836504    4988 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf ...
	I0731 15:09:08.836509    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf: {Name:mk2c7609c59a21189518168a8dd8ebaba6a7ef28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.836677    4988 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf -> /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt
	I0731 15:09:08.836808    4988 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf -> /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key
	I0731 15:09:08.836936    4988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/proxy-client.key
	I0731 15:09:08.837062    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913.pem (1338 bytes)
	W0731 15:09:08.837091    4988 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913_empty.pem, impossibly tiny 0 bytes
	I0731 15:09:08.837095    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 15:09:08.837114    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem (1078 bytes)
	I0731 15:09:08.837132    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem (1123 bytes)
	I0731 15:09:08.837152    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem (1679 bytes)
	I0731 15:09:08.837191    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem (1708 bytes)
	I0731 15:09:08.837556    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 15:09:08.845031    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 15:09:08.851567    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 15:09:08.858140    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 15:09:08.865395    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 15:09:08.872584    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 15:09:08.879591    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 15:09:08.886059    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 15:09:08.893194    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 15:09:08.900054    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913.pem --> /usr/share/ca-certificates/1913.pem (1338 bytes)
	I0731 15:09:08.906493    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem --> /usr/share/ca-certificates/19132.pem (1708 bytes)
	I0731 15:09:08.913477    4988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 15:09:08.918778    4988 ssh_runner.go:195] Run: openssl version
	I0731 15:09:08.920688    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 15:09:08.923588    4988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:09:08.924950    4988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:09:08.924975    4988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:09:08.926938    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 15:09:08.930025    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1913.pem && ln -fs /usr/share/ca-certificates/1913.pem /etc/ssl/certs/1913.pem"
	I0731 15:09:08.933396    4988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1913.pem
	I0731 15:09:08.934927    4988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:34 /usr/share/ca-certificates/1913.pem
	I0731 15:09:08.934950    4988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1913.pem
	I0731 15:09:08.936727    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1913.pem /etc/ssl/certs/51391683.0"
	I0731 15:09:08.939654    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19132.pem && ln -fs /usr/share/ca-certificates/19132.pem /etc/ssl/certs/19132.pem"
	I0731 15:09:08.942489    4988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19132.pem
	I0731 15:09:08.943961    4988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:34 /usr/share/ca-certificates/19132.pem
	I0731 15:09:08.943979    4988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19132.pem
	I0731 15:09:08.945762    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 15:09:08.949568    4988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 15:09:08.951092    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 15:09:08.954771    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 15:09:08.956473    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 15:09:08.958346    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 15:09:08.960206    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 15:09:08.962032    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
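
Each "openssl x509 -checkend 86400" run above exits 0 only if the certificate is still valid 24 hours from now, which is how the existing control-plane certs are deemed fresh enough to reuse. An equivalent check written directly against Go's crypto/x509, taking the certificate path as its first argument (a standalone sketch, e.g. "go run checkend.go /var/lib/minikube/certs/apiserver.crt"):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // exit 0 if the certificate is still valid 24 hours from now, 1 if not.
    func main() {
        raw, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(2)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate is valid beyond 86400s")
    }
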
	I0731 15:09:08.963929    4988 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:09:08.963994    4988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 15:09:08.974001    4988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 15:09:08.977219    4988 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 15:09:08.977226    4988 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 15:09:08.977249    4988 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 15:09:08.980118    4988 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:09:08.980387    4988 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-609000" does not appear in /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:09:08.980484    4988 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1411/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-609000" cluster setting kubeconfig missing "stopped-upgrade-609000" context setting]
	I0731 15:09:08.980685    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.982742    4988 kapi.go:59] client config for stopped-upgrade-609000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101950700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 15:09:08.983036    4988 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 15:09:08.985682    4988 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-609000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
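
Drift detection here is a plain "diff -u" between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new; any difference (the criSocket and cgroupDriver hunks above) forces a reconfigure. A byte-for-byte Go sketch of the same decision (the real runner shells out to "sudo diff -u" over SSH, so it also gets the hunk output shown above):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // Same decision as the drift check above: if the kubeadm.yaml on the node
    // differs from the freshly generated kubeadm.yaml.new, the cluster must be
    // reconfigured from the new file.
    func main() {
        current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        fresh, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if !bytes.Equal(current, fresh) {
            fmt.Println("kubeadm config drift detected; reconfigure from kubeadm.yaml.new")
            os.Exit(1)
        }
        fmt.Println("kubeadm config unchanged")
    }
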
	I0731 15:09:08.985689    4988 kubeadm.go:1160] stopping kube-system containers ...
	I0731 15:09:08.985729    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 15:09:08.996301    4988 docker.go:483] Stopping containers: [8bb0ebee54c4 e30a2b1ee885 6ae13f04c4cd fe739fbe2f95 c66065d4d5ac a278d566ee4c 3d36dc6afdf3 b869ebda42e1 738225ad0b68]
	I0731 15:09:08.996360    4988 ssh_runner.go:195] Run: docker stop 8bb0ebee54c4 e30a2b1ee885 6ae13f04c4cd fe739fbe2f95 c66065d4d5ac a278d566ee4c 3d36dc6afdf3 b869ebda42e1 738225ad0b68
	I0731 15:09:09.011269    4988 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 15:09:09.016737    4988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:09:09.019553    4988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 15:09:09.019559    4988 kubeadm.go:157] found existing configuration files:
	
	I0731 15:09:09.019580    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0731 15:09:09.021958    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 15:09:09.021980    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:09:09.024930    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0731 15:09:09.027536    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 15:09:09.027557    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:09:09.029942    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0731 15:09:09.032982    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 15:09:09.033007    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:09:09.035528    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0731 15:09:09.037788    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 15:09:09.037807    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
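
The grep/rm sequence above keeps an /etc/kubernetes/*.conf kubeconfig only when it already references the expected control-plane endpoint; a failed grep (status 2 here, since the files do not exist) removes the file so the following "kubeadm init phase kubeconfig all" regenerates it. A Go sketch of that cleanup, with the endpoint taken from this run (illustrative only; the real code runs grep and rm over SSH with sudo):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // Keep each kubeconfig only if it already points at the expected
    // control-plane endpoint; otherwise remove it so `kubeadm init phase
    // kubeconfig all` writes a fresh one.
    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:50498")
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                fmt.Printf("%s stale or missing, removing\n", f)
                os.Remove(f) // ignores the not-exist case, as in the log
            }
        }
    }
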
	I0731 15:09:09.040758    4988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:09:09.043656    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.066093    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.528979    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.642206    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:12.016429    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:12.016662    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:12.038320    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:12.038427    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:12.054559    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:12.054645    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:12.066897    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:12.066970    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:12.084164    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:12.084241    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:12.095077    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:12.095144    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:12.105744    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:12.105809    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:12.116081    4804 logs.go:276] 0 containers: []
	W0731 15:09:12.116092    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:12.116160    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:12.126596    4804 logs.go:276] 0 containers: []
	W0731 15:09:12.126609    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:12.126617    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:12.126621    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:12.167812    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:12.167823    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:12.171923    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:12.171933    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:12.192011    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:12.192022    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:12.211269    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:12.211280    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:12.222998    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:12.223016    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:12.238735    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:12.238747    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:12.252818    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:12.252827    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:12.264141    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:12.264150    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:12.278920    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:12.278933    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:12.290365    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:12.290379    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:12.302248    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:12.302262    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:12.338247    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:12.338259    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:12.356409    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:12.356420    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:12.371063    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:12.371072    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:09.671681    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.708622    4988 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:09:09.708704    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:10.210898    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:10.710807    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:10.715440    4988 api_server.go:72] duration metric: took 1.006836375s to wait for apiserver process to appear ...
	I0731 15:09:10.715453    4988 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:09:10.715462    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
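
From here both test processes (pids 4988 and 4804) sit in the same wait loop: GET https://10.0.2.15:8443/healthz with a short per-request timeout, where every "context deadline exceeded" line below is one poll the apiserver never answered. A minimal Go version of that poller (timeout values are illustrative, and InsecureSkipVerify stands in for the CA handling built by the client config in kapi.go above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Poll the apiserver's /healthz with a short per-request timeout until it
    // answers 200 OK or the overall deadline passes; each failed GET maps to
    // one "stopped: ... context deadline exceeded" line in the log.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }
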
	I0731 15:09:14.894163    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:15.717540    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:15.717586    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:19.896338    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:19.896505    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:19.912406    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:19.912483    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:19.922980    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:19.923049    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:19.933975    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:19.934041    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:19.951791    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:19.951866    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:19.962375    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:19.962449    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:19.973424    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:19.973490    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:19.983468    4804 logs.go:276] 0 containers: []
	W0731 15:09:19.983479    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:19.983531    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:19.993886    4804 logs.go:276] 0 containers: []
	W0731 15:09:19.993899    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:19.993907    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:19.993913    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:20.030468    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:20.030479    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:20.044425    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:20.044439    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:20.064048    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:20.064060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:20.078463    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:20.078473    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:20.094198    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:20.094210    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:20.112600    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:20.112613    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:20.153664    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:20.153672    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:20.165026    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:20.165038    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:20.175974    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:20.175985    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:20.198665    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:20.198678    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:20.209946    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:20.209962    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:20.214737    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:20.214746    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:20.228985    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:20.228997    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:20.242399    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:20.242413    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:22.756137    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:20.717882    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:20.717905    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:27.758355    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:27.758497    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:09:27.769354    4804 logs.go:276] 2 containers: [096fd66a21ed 70c9561862f0]
	I0731 15:09:27.769427    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:09:27.781688    4804 logs.go:276] 2 containers: [84fd5a1f29ca e7a46ccd2d88]
	I0731 15:09:27.781753    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:09:27.791693    4804 logs.go:276] 1 containers: [89c4e0542ee0]
	I0731 15:09:27.791758    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:09:27.805189    4804 logs.go:276] 2 containers: [3423327d9697 d4309a5fa412]
	I0731 15:09:27.805267    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:09:27.817672    4804 logs.go:276] 1 containers: [c9cafce3becc]
	I0731 15:09:27.817747    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:09:27.828984    4804 logs.go:276] 2 containers: [5271c382d5b3 010ea24cdd43]
	I0731 15:09:27.829051    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:09:27.856284    4804 logs.go:276] 0 containers: []
	W0731 15:09:27.856297    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:09:27.856361    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:09:27.866792    4804 logs.go:276] 0 containers: []
	W0731 15:09:27.866804    4804 logs.go:278] No container was found matching "storage-provisioner"
	I0731 15:09:27.866811    4804 logs.go:123] Gathering logs for etcd [84fd5a1f29ca] ...
	I0731 15:09:27.866817    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84fd5a1f29ca"
	I0731 15:09:27.885128    4804 logs.go:123] Gathering logs for kube-scheduler [3423327d9697] ...
	I0731 15:09:27.885138    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3423327d9697"
	I0731 15:09:27.898523    4804 logs.go:123] Gathering logs for kube-proxy [c9cafce3becc] ...
	I0731 15:09:27.898532    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9cafce3becc"
	I0731 15:09:27.909943    4804 logs.go:123] Gathering logs for kube-controller-manager [5271c382d5b3] ...
	I0731 15:09:27.909959    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5271c382d5b3"
	I0731 15:09:27.928990    4804 logs.go:123] Gathering logs for kube-controller-manager [010ea24cdd43] ...
	I0731 15:09:27.929001    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 010ea24cdd43"
	I0731 15:09:27.940226    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:09:27.940238    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:09:27.962540    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:09:27.962547    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:09:28.001568    4804 logs.go:123] Gathering logs for etcd [e7a46ccd2d88] ...
	I0731 15:09:28.001580    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a46ccd2d88"
	I0731 15:09:28.016463    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:09:28.016474    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:09:28.028247    4804 logs.go:123] Gathering logs for coredns [89c4e0542ee0] ...
	I0731 15:09:28.028256    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c4e0542ee0"
	I0731 15:09:28.039924    4804 logs.go:123] Gathering logs for kube-scheduler [d4309a5fa412] ...
	I0731 15:09:28.039936    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4309a5fa412"
	I0731 15:09:28.054806    4804 logs.go:123] Gathering logs for kube-apiserver [096fd66a21ed] ...
	I0731 15:09:28.054816    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 096fd66a21ed"
	I0731 15:09:28.069551    4804 logs.go:123] Gathering logs for kube-apiserver [70c9561862f0] ...
	I0731 15:09:28.069562    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70c9561862f0"
	I0731 15:09:28.089246    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:09:28.089255    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:09:28.093901    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:09:28.093909    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:09:25.718245    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:25.718313    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:30.631112    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:30.719048    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:30.719092    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:35.633499    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:35.633675    4804 kubeadm.go:597] duration metric: took 4m3.797122s to restartPrimaryControlPlane
	W0731 15:09:35.633801    4804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 15:09:35.633851    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 15:09:36.594893    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 15:09:36.600099    4804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:09:36.602884    4804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:09:36.605730    4804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 15:09:36.605737    4804 kubeadm.go:157] found existing configuration files:
	
	I0731 15:09:36.605758    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf
	I0731 15:09:36.608138    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 15:09:36.608163    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:09:36.610762    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf
	I0731 15:09:36.613881    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 15:09:36.613904    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:09:36.616393    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf
	I0731 15:09:36.619019    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 15:09:36.619046    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:09:36.622158    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf
	I0731 15:09:36.624615    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 15:09:36.624636    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 15:09:36.627294    4804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 15:09:36.645229    4804 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 15:09:36.645260    4804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 15:09:36.691357    4804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 15:09:36.691417    4804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 15:09:36.691506    4804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 15:09:36.740888    4804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 15:09:36.745851    4804 out.go:204]   - Generating certificates and keys ...
	I0731 15:09:36.745888    4804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 15:09:36.745927    4804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 15:09:36.745973    4804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 15:09:36.746012    4804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 15:09:36.746059    4804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 15:09:36.746093    4804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 15:09:36.746129    4804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 15:09:36.746171    4804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 15:09:36.746215    4804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 15:09:36.746262    4804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 15:09:36.746287    4804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 15:09:36.746317    4804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 15:09:36.844694    4804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 15:09:36.972600    4804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 15:09:37.095955    4804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 15:09:37.173266    4804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 15:09:37.202734    4804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 15:09:37.203366    4804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 15:09:37.203392    4804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 15:09:37.280899    4804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 15:09:37.285054    4804 out.go:204]   - Booting up control plane ...
	I0731 15:09:37.285104    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 15:09:37.285146    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 15:09:37.285179    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 15:09:37.285217    4804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 15:09:37.285303    4804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 15:09:35.719738    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:35.719758    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:41.789597    4804 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505693 seconds
	I0731 15:09:41.789904    4804 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 15:09:41.797581    4804 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 15:09:42.310869    4804 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 15:09:42.310985    4804 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-683000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 15:09:42.816152    4804 kubeadm.go:310] [bootstrap-token] Using token: svmwkp.h1lf5uy1wworw3a0
	I0731 15:09:42.822438    4804 out.go:204]   - Configuring RBAC rules ...
	I0731 15:09:42.822506    4804 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 15:09:42.822554    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 15:09:42.824612    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 15:09:42.826131    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0731 15:09:42.826804    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 15:09:42.827675    4804 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 15:09:42.830783    4804 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 15:09:42.999322    4804 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 15:09:43.221071    4804 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 15:09:43.221539    4804 kubeadm.go:310] 
	I0731 15:09:43.221571    4804 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 15:09:43.221577    4804 kubeadm.go:310] 
	I0731 15:09:43.221612    4804 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 15:09:43.221616    4804 kubeadm.go:310] 
	I0731 15:09:43.221630    4804 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 15:09:43.221660    4804 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 15:09:43.221688    4804 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 15:09:43.221692    4804 kubeadm.go:310] 
	I0731 15:09:43.221720    4804 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 15:09:43.221723    4804 kubeadm.go:310] 
	I0731 15:09:43.221757    4804 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 15:09:43.221763    4804 kubeadm.go:310] 
	I0731 15:09:43.221803    4804 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 15:09:43.221846    4804 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 15:09:43.221888    4804 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 15:09:43.221893    4804 kubeadm.go:310] 
	I0731 15:09:43.221933    4804 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 15:09:43.222034    4804 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 15:09:43.222057    4804 kubeadm.go:310] 
	I0731 15:09:43.222102    4804 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token svmwkp.h1lf5uy1wworw3a0 \
	I0731 15:09:43.222154    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a \
	I0731 15:09:43.222165    4804 kubeadm.go:310] 	--control-plane 
	I0731 15:09:43.222192    4804 kubeadm.go:310] 
	I0731 15:09:43.222256    4804 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 15:09:43.222262    4804 kubeadm.go:310] 
	I0731 15:09:43.222321    4804 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token svmwkp.h1lf5uy1wworw3a0 \
	I0731 15:09:43.222390    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a 
	I0731 15:09:43.222442    4804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
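For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, hex-encoded. A short sketch that recomputes it; the CA path is the conventional kubeadm location, which this log does not show, so treat it as an assumption:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}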
	I0731 15:09:43.222450    4804 cni.go:84] Creating CNI manager for ""
	I0731 15:09:43.222458    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:09:43.228835    4804 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 15:09:43.235976    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 15:09:43.238909    4804 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
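The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced above. Its exact contents are not shown in the log; the sketch below writes an illustrative bridge-plus-portmap plugin chain of the usual shape, and every field value in it is an assumption rather than the file minikube actually wrote:

package main

import "os"

// bridgeConflist is an illustrative bridge CNI config, not the logged file.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}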
	I0731 15:09:43.243880    4804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 15:09:43.243934    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 15:09:43.243943    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-683000 minikube.k8s.io/updated_at=2024_07_31T15_09_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=running-upgrade-683000 minikube.k8s.io/primary=true
	I0731 15:09:43.284117    4804 kubeadm.go:1113] duration metric: took 40.223ms to wait for elevateKubeSystemPrivileges
	I0731 15:09:43.292582    4804 ops.go:34] apiserver oom_adj: -16
	I0731 15:09:43.292717    4804 kubeadm.go:394] duration metric: took 4m11.469595833s to StartCluster
	I0731 15:09:43.292730    4804 settings.go:142] acquiring lock: {Name:mk4ba9457258541473c3bcf6c2e4b75027bd146e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:43.292816    4804 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:09:43.293214    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:43.293394    4804 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:09:43.293401    4804 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 15:09:43.293440    4804 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-683000"
	I0731 15:09:43.293488    4804 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-683000"
	W0731 15:09:43.293492    4804 addons.go:243] addon storage-provisioner should already be in state true
	I0731 15:09:43.293487    4804 config.go:182] Loaded profile config "running-upgrade-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:09:43.293444    4804 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-683000"
	I0731 15:09:43.293512    4804 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-683000"
	I0731 15:09:43.293503    4804 host.go:66] Checking if "running-upgrade-683000" exists ...
	I0731 15:09:43.294382    4804 kapi.go:59] client config for running-upgrade-683000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1024fc700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
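The rest.Config dump above can be read back as ordinary client-go code: a client authenticated with the profile's client certificate and key, verified against the minikube CA. A sketch of that construction, with the paths taken from the log; the trailing Nodes list is an illustrative probe, not something the log performs (requires k8s.io/client-go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/running-upgrade-683000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// With the apiserver unreachable, this surfaces as the
		// "dial tcp 10.0.2.15:8443: i/o timeout" seen later in this log.
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}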
	I0731 15:09:43.294499    4804 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-683000"
	W0731 15:09:43.294503    4804 addons.go:243] addon default-storageclass should already be in state true
	I0731 15:09:43.294509    4804 host.go:66] Checking if "running-upgrade-683000" exists ...
	I0731 15:09:43.297942    4804 out.go:177] * Verifying Kubernetes components...
	I0731 15:09:43.298287    4804 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 15:09:43.302126    4804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 15:09:43.302134    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:09:43.304896    4804 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:43.307852    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:43.311915    4804 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:09:43.311921    4804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 15:09:43.311928    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50272 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/running-upgrade-683000/id_rsa Username:docker}
	I0731 15:09:43.402886    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:09:43.408774    4804 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:09:43.408827    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:43.412901    4804 api_server.go:72] duration metric: took 119.49775ms to wait for apiserver process to appear ...
	I0731 15:09:43.412911    4804 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:09:43.412920    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
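The long run of "Checking apiserver healthz ... stopped: context deadline exceeded" pairs that follows is a timed polling loop: a GET against /healthz with a short per-request client timeout, retried until the server answers "ok" or an overall deadline expires. A minimal sketch of that loop, assuming a 5s client timeout to match the ~5s spacing of the "stopped:" lines (the real api_server.go tracks more states than this):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request deadline, as in the log
		Transport: &http.Transport{
			// Sketch only: skip verification; real code pins the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err) // the repeated line above
			time.Sleep(500 * time.Millisecond)        // back off before retrying
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(6*time.Minute)); err != nil {
		fmt.Println(err)
	}
}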
	I0731 15:09:43.455760    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 15:09:43.467709    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:09:40.720569    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:40.720631    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:48.414968    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:48.415010    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:45.721880    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:45.721924    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:53.415314    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:53.415372    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:50.723499    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:50.723542    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:58.415659    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:58.415692    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:55.725469    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:55.725499    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:03.416061    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:03.416105    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:00.727646    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:00.727693    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:08.416651    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:08.416703    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:05.729930    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:05.729985    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:13.417434    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:13.417456    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 15:10:13.795396    4804 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 15:10:13.800253    4804 out.go:177] * Enabled addons: storage-provisioner
	I0731 15:10:13.807126    4804 addons.go:510] duration metric: took 30.514217042s for enable addons: enabled=[storage-provisioner]
	I0731 15:10:10.732365    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:10.732667    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:10.759871    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:10.759986    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:10.776462    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:10.776554    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:10.789959    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:10.790033    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:10.801807    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:10.801876    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:10.815898    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:10.815961    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:10.826298    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:10.826361    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:10.836620    4988 logs.go:276] 0 containers: []
	W0731 15:10:10.836632    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:10.836687    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:10.850864    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:10.850880    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:10.850887    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:10.855552    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:10.855559    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:10.869482    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:10.869492    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:10.899863    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:10.899874    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:10.911884    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:10.911895    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:10.925828    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:10.925839    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:10.940554    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:10.940565    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:10.951815    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:10.951826    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:10.963065    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:10.963077    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:11.000615    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:11.000625    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:11.105101    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:11.105115    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:11.116919    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:11.116940    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:11.129018    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:11.129030    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:11.174096    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:11.174108    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:11.192236    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:11.192247    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:11.206252    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:11.206263    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
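Each "Gathering logs" cycle above follows the same recipe: discover a component's containers through the kubelet's k8s_<component> container-name prefix, then tail the last 400 lines of each, alongside journalctl for kubelet/Docker and a filtered dmesg. A sketch of the container half of that recipe against a local docker CLI, using the same commands the log runs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or not) whose kubelet-assigned
// name starts with k8s_<component>, exactly as the log's docker ps filter does.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Same bounded tail the log uses for every container.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}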
	I0731 15:10:13.730748    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:18.417948    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:18.417975    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:18.732114    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:18.732267    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:18.749921    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:18.750005    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:18.763944    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:18.764026    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:18.774862    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:18.774933    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:18.785459    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:18.785534    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:18.795715    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:18.795786    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:18.806283    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:18.806347    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:18.819134    4988 logs.go:276] 0 containers: []
	W0731 15:10:18.819144    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:18.819206    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:18.828632    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:18.828648    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:18.828653    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:18.867549    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:18.867561    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:18.880545    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:18.880560    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:18.894409    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:18.894421    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:18.905873    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:18.905884    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:18.931523    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:18.931532    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:18.970008    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:18.970020    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:18.974480    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:18.974491    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:18.988385    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:18.988395    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:19.022115    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:19.022130    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:19.033906    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:19.033918    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:19.045944    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:19.045956    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:19.084509    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:19.084520    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:19.099030    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:19.099045    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:19.114387    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:19.114398    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:19.126214    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:19.126226    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:23.419067    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:23.419114    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:21.647087    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:28.420525    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:28.420576    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:26.649327    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:26.649535    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:26.662865    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:26.662941    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:26.673904    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:26.673981    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:26.684825    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:26.684902    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:26.695131    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:26.695207    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:26.705192    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:26.705255    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:26.715216    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:26.715281    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:26.725503    4988 logs.go:276] 0 containers: []
	W0731 15:10:26.725518    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:26.725573    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:26.735925    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:26.735942    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:26.735947    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:26.747830    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:26.747840    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:26.767471    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:26.767482    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:26.806513    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:26.806523    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:26.818337    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:26.818347    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:26.839723    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:26.839738    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:26.864106    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:26.864116    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:26.876368    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:26.876380    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:26.888723    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:26.888733    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:26.906393    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:26.906404    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:26.921065    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:26.921075    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:26.960420    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:26.960430    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:26.965459    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:26.965465    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:27.001817    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:27.001827    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:27.016580    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:27.016590    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:27.031985    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:27.031995    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:29.543453    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:33.421324    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:33.421376    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:34.545737    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:34.545921    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:34.565001    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:34.565095    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:34.579891    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:34.579966    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:34.591591    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:34.591658    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:34.602513    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:34.602581    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:34.613124    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:34.613192    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:34.624227    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:34.624296    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:34.634271    4988 logs.go:276] 0 containers: []
	W0731 15:10:34.634283    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:34.634335    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:34.644334    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:34.644349    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:34.644354    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:38.423381    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:38.423425    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:34.666405    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:34.666415    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:34.677706    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:34.677716    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:34.692715    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:34.692726    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:34.704704    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:34.704718    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:34.742451    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:34.742466    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:34.754135    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:34.754147    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:34.771214    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:34.771223    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:34.775276    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:34.775282    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:34.811888    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:34.811903    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:34.824041    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:34.824051    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:34.837659    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:34.837670    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:34.861742    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:34.861755    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:34.898119    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:34.898133    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:34.911842    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:34.911853    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:34.932471    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:34.932494    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:37.446499    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:43.425637    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:43.425725    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:43.437260    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:10:43.437352    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:43.448013    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:10:43.448077    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:43.462503    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:10:43.462578    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:43.477758    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:10:43.477824    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:43.488946    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:10:43.489018    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:43.499064    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:10:43.499125    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:43.509929    4804 logs.go:276] 0 containers: []
	W0731 15:10:43.509940    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:43.509999    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:43.520505    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:10:43.520520    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:10:43.520526    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:10:43.532202    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:10:43.532212    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:10:43.544085    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:10:43.544099    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:10:43.554868    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:43.554883    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:43.579836    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:43.579843    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:43.614683    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:43.614690    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:43.618844    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:10:43.618849    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:10:43.632349    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:10:43.632365    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:10:43.644072    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:10:43.644082    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:43.655593    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:43.655607    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:43.691235    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:10:43.691250    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:10:43.709560    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:10:43.709570    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:10:43.725909    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:10:43.725925    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:10:42.448843    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:42.449200    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:42.479141    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:42.479266    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:42.498103    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:42.498197    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:42.512771    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:42.512853    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:42.525194    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:42.525271    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:42.535966    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:42.536037    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:42.546592    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:42.546665    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:42.561735    4988 logs.go:276] 0 containers: []
	W0731 15:10:42.561746    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:42.561813    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:42.572268    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:42.572286    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:42.572292    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:42.584613    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:42.584624    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:42.588984    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:42.588990    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:42.602725    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:42.602735    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:42.614026    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:42.614042    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:42.636154    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:42.636163    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:42.659364    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:42.659371    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:42.695372    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:42.695380    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:42.730251    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:42.730263    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:42.744622    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:42.744636    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:42.761883    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:42.761894    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:42.776133    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:42.776145    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:42.787416    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:42.787429    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:42.799476    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:42.799489    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:42.814066    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:42.814077    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:42.852957    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:42.852977    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:46.248297    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:45.366881    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:51.250861    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:51.251040    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:51.266868    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:10:51.266955    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:51.287523    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:10:51.287592    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:51.298072    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:10:51.298139    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:51.308569    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:10:51.308645    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:51.319983    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:10:51.320053    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:51.331266    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:10:51.331331    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:51.341309    4804 logs.go:276] 0 containers: []
	W0731 15:10:51.341319    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:51.341378    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:51.352101    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:10:51.352121    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:10:51.352126    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:10:51.366800    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:10:51.366810    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:10:51.381008    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:10:51.381019    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:10:51.393205    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:51.393216    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:51.417288    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:51.417306    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:51.451921    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:51.451930    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:51.456140    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:10:51.456147    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:10:51.470024    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:10:51.470033    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:10:51.482131    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:10:51.482142    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:51.495606    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:51.495616    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:51.537226    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:10:51.537237    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:10:51.551408    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:10:51.551419    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:10:51.563671    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:10:51.563682    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
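	(The block above is one full log-collection pass. For reference, a minimal sketch of that pass — not minikube's actual implementation, just a reconstruction of the docker commands that appear verbatim in the log: enumerate the containers for each control-plane component with `docker ps -a --filter=name=k8s_<component>`, then tail each one's logs.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names the log iterates over.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			// Mirrors the ssh_runner invocation:
			//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			for _, id := range ids {
				// Mirrors `docker logs --tail 400 <id>` from the log.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
			}
		}
	}
	```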
	I0731 15:10:50.369073    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:50.369319    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:50.393787    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:50.393907    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:50.412688    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:50.412781    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:50.424490    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:50.424555    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:50.435418    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:50.435488    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:50.445696    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:50.445769    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:50.456023    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:50.456086    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:50.466132    4988 logs.go:276] 0 containers: []
	W0731 15:10:50.466144    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:50.466207    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:50.477714    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:50.477736    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:50.477742    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:50.516940    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:50.516959    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:50.556647    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:50.556663    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:50.571191    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:50.571203    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:50.582854    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:50.582869    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:50.607172    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:50.607178    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:50.620750    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:50.620761    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:50.631903    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:50.631913    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:50.635959    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:50.635965    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:50.670982    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:50.670992    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:50.685034    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:50.685049    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:50.702999    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:50.703009    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:50.718961    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:50.718972    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:50.730072    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:50.730086    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:50.742801    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:50.742810    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:50.764893    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:50.764905    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
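	(Each collection pass is triggered by a failed healthz probe: a "Checking apiserver healthz" line followed roughly five seconds later by "stopped: ... Client.Timeout exceeded while awaiting headers". A minimal sketch of such a probe follows — assumptions: the apiserver address 10.0.2.15:8443 and the ~5s client timeout are taken from the log; TLS verification is skipped here for brevity, whereas minikube authenticates against the cluster's own certificates.)

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Matches the ~5s gap between "Checking" and "stopped" log lines.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only; the real check trusts the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < 3; i++ {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				// With an unresponsive apiserver, Go's http.Client reports exactly
				// the error seen above: "context deadline exceeded (Client.Timeout
				// exceeded while awaiting headers)".
				fmt.Println("stopped:", err)
				time.Sleep(3 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("healthz:", resp.Status)
			return
		}
	}
	```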
	I0731 15:10:53.279091    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:54.083723    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:58.281420    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:58.281766    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:58.312867    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:58.313003    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:58.332055    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:58.332155    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:58.346109    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:58.346194    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:58.360450    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:58.360521    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:58.371185    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:58.371258    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:58.385859    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:58.385923    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:58.396393    4988 logs.go:276] 0 containers: []
	W0731 15:10:58.396407    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:58.396463    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:58.406829    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:58.406847    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:58.406854    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:58.429532    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:58.429543    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:58.441313    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:58.441324    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:58.480285    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:58.480300    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:58.494993    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:58.495007    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:58.509985    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:58.509995    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:58.521567    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:58.521580    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:58.563540    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:58.563551    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:58.574703    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:58.574714    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:58.599499    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:58.599512    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:58.617163    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:58.617174    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:58.630523    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:58.630537    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:58.642546    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:58.642556    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:58.679372    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:58.679381    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:58.683269    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:58.683278    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:58.695736    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:58.695746    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:59.086051    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:59.086187    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:59.096954    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:10:59.097032    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:59.107252    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:10:59.107326    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:59.117516    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:10:59.117579    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:59.128291    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:10:59.128363    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:59.138768    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:10:59.138842    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:59.149097    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:10:59.149172    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:59.158980    4804 logs.go:276] 0 containers: []
	W0731 15:10:59.158990    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:59.159051    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:59.170626    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:10:59.170642    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:10:59.170647    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:10:59.184842    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:10:59.184850    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:10:59.197290    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:10:59.197300    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:10:59.212445    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:10:59.212455    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:10:59.233162    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:59.233174    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:59.258318    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:10:59.258326    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:59.269558    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:59.269569    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:59.307101    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:59.307110    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:59.311510    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:10:59.311520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:10:59.323416    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:10:59.323427    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:10:59.335112    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:10:59.335125    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:10:59.352766    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:59.352777    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:59.388623    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:10:59.388634    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:01.902775    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:01.220227    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:06.905043    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:06.905182    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:06.918677    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:06.918746    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:06.929634    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:06.929704    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:06.939717    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:06.939787    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:06.952326    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:06.952394    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:06.962759    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:06.962821    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:06.973359    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:06.973426    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:06.983866    4804 logs.go:276] 0 containers: []
	W0731 15:11:06.983878    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:06.983937    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:06.998078    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:06.998093    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:06.998098    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:07.033630    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:07.033638    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:07.048132    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:07.048142    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:07.062078    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:07.062091    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:07.084293    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:07.084303    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:07.098819    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:07.098832    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:07.116553    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:07.116561    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:07.121538    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:07.121547    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:07.156040    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:07.156051    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:07.167115    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:07.167124    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:07.178539    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:07.178549    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:07.189947    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:07.189958    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:07.214706    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:07.214714    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:06.222068    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:06.222460    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:06.250862    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:06.250994    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:06.269489    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:06.269578    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:06.282640    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:06.282715    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:06.294528    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:06.294598    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:06.304885    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:06.304957    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:06.318258    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:06.318326    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:06.329005    4988 logs.go:276] 0 containers: []
	W0731 15:11:06.329020    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:06.329085    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:06.339223    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:06.339240    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:06.339246    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:06.351120    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:06.351134    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:06.368802    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:06.368823    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:06.380178    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:06.380190    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:06.418197    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:06.418206    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:06.432691    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:06.432703    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:06.456755    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:06.456763    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:06.468673    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:06.468685    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:06.505458    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:06.505468    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:06.543595    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:06.543606    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:06.555769    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:06.555784    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:06.567121    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:06.567132    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:06.581105    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:06.581116    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:06.595128    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:06.595140    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:06.608639    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:06.608649    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:06.629918    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:06.629931    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:09.136311    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:09.732339    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:14.138680    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:14.138891    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:14.160241    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:14.160340    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:14.175500    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:14.175575    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:14.188509    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:14.188591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:14.199148    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:14.199218    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:14.209245    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:14.209309    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:14.220661    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:14.220735    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:14.231324    4988 logs.go:276] 0 containers: []
	W0731 15:11:14.231335    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:14.231393    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:14.241957    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:14.241976    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:14.241982    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:14.256324    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:14.256338    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:14.273127    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:14.273136    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:14.296294    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:14.296303    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:14.308450    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:14.308463    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:14.322719    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:14.322731    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:14.367630    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:14.367643    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:14.384318    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:14.384334    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:14.401465    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:14.401478    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:14.406188    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:14.406194    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:14.418294    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:14.418305    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:14.439504    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:14.439516    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:14.452570    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:14.452581    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:14.492166    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:14.492176    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:14.510486    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:14.510502    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:14.523491    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:14.523500    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:14.734102    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:14.734279    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:14.750256    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:14.750338    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:14.762475    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:14.762556    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:14.773369    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:14.773434    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:14.783987    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:14.784057    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:14.794254    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:14.794324    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:14.804676    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:14.804743    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:14.815451    4804 logs.go:276] 0 containers: []
	W0731 15:11:14.815462    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:14.815518    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:14.826173    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:14.826189    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:14.826194    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:14.830701    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:14.830708    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:14.844680    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:14.844690    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:14.865531    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:14.865544    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:14.877581    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:14.877593    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:14.892638    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:14.892649    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:14.904277    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:14.904288    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:14.917249    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:14.917263    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:14.942794    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:14.942805    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:14.980312    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:14.980320    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:15.016640    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:15.016651    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:15.031272    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:15.031283    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:15.048658    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:15.048668    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:17.562808    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:17.064113    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:22.565047    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:22.565154    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:22.577494    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:22.577567    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:22.588055    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:22.588124    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:22.598976    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:22.599044    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:22.610354    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:22.610424    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:22.621289    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:22.621352    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:22.631534    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:22.631608    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:22.641726    4804 logs.go:276] 0 containers: []
	W0731 15:11:22.641737    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:22.641796    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:22.653474    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:22.653490    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:22.653495    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:22.665609    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:22.665624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:22.682762    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:22.682770    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:22.694230    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:22.694241    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:22.717455    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:22.717462    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:22.755853    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:22.755864    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:22.760610    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:22.760616    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:22.795130    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:22.795143    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:22.808321    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:22.808337    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:22.825942    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:22.825953    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:22.838382    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:22.838397    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:22.853597    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:22.853608    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:22.868358    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:22.868370    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:22.066415    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:22.066625    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:22.081524    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:22.081612    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:22.093732    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:22.093802    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:22.104130    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:22.104199    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:22.114679    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:22.114753    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:22.125432    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:22.125506    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:22.135521    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:22.135591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:22.145376    4988 logs.go:276] 0 containers: []
	W0731 15:11:22.145387    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:22.145449    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:22.155684    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:22.155700    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:22.155708    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:22.167095    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:22.167106    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:22.188646    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:22.188661    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:22.211842    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:22.211852    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:22.245765    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:22.245779    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:22.259605    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:22.259615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:22.299493    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:22.299504    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:22.314332    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:22.314342    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:22.329025    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:22.329035    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:22.341601    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:22.341613    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:22.379807    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:22.379819    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:22.400534    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:22.400544    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:22.411870    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:22.411885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:22.428942    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:22.428953    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:22.442453    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:22.442466    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:22.446899    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:22.446907    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:25.382173    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:24.959923    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:30.384400    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:30.384558    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:30.396034    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:30.396137    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:30.406596    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:30.406667    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:30.417306    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:30.417387    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:30.427988    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:30.428054    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:30.438380    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:30.438452    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:30.448881    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:30.448957    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:30.458753    4804 logs.go:276] 0 containers: []
	W0731 15:11:30.458766    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:30.458828    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:30.469858    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:30.469871    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:30.469877    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:30.474283    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:30.474289    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:30.488332    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:30.488346    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:30.502316    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:30.502325    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:30.519618    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:30.519627    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:30.531063    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:30.531073    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:30.566866    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:30.566875    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:30.607579    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:30.607593    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:30.621592    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:30.621606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:30.633653    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:30.633663    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:30.645850    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:30.645861    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:30.657975    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:30.657987    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:30.683349    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:30.683364    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:33.210710    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:29.962104    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:29.962435    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:29.976696    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:29.976769    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:29.987602    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:29.987673    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:29.998247    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:29.998323    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:30.009077    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:30.009149    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:30.018936    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:30.019024    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:30.029933    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:30.030001    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:30.040279    4988 logs.go:276] 0 containers: []
	W0731 15:11:30.040290    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:30.040345    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:30.050304    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:30.050321    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:30.050327    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:30.061923    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:30.061933    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:30.076445    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:30.076456    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:30.080394    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:30.080403    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:30.106513    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:30.106522    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:30.121002    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:30.121014    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:30.132996    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:30.133005    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:30.145343    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:30.145354    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:30.159190    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:30.159201    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:30.196819    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:30.196830    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:30.219781    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:30.219790    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:30.242859    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:30.242872    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:30.261867    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:30.261877    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:30.300286    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:30.300294    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:30.358973    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:30.358985    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:30.373515    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:30.373527    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:32.893118    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:38.212880    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:38.212967    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:38.225415    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:38.225492    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:38.236646    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:38.236717    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:38.248088    4804 logs.go:276] 2 containers: [89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:38.248156    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:38.263445    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:38.263516    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:38.274913    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:38.274991    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:38.294874    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:38.294944    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:38.306336    4804 logs.go:276] 0 containers: []
	W0731 15:11:38.306349    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:38.306405    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:38.318465    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:38.318481    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:38.318486    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:38.330834    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:38.330844    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:38.345239    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:38.345250    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:38.356788    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:38.356799    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:38.373646    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:38.373657    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:38.409132    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:38.409142    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:38.444079    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:38.444091    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:38.458577    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:38.458589    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:38.470040    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:38.470050    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:38.481522    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:38.481532    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:38.492793    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:38.492804    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:38.497649    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:38.497655    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:38.511948    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:38.511958    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:37.895338    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:37.895481    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:37.913063    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:37.913147    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:37.924263    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:37.924338    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:37.934770    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:37.934835    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:37.945474    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:37.945544    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:37.956120    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:37.956192    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:37.966965    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:37.967035    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:37.977480    4988 logs.go:276] 0 containers: []
	W0731 15:11:37.977490    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:37.977550    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:37.987859    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:37.987879    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:37.987885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:38.002243    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:38.002256    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:38.019264    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:38.019274    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:38.033547    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:38.033558    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:38.038433    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:38.038440    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:38.052831    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:38.052842    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:38.074282    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:38.074295    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:38.085938    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:38.085950    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:38.097402    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:38.097415    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:38.131874    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:38.131885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:38.170731    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:38.170742    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:38.182085    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:38.182096    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:38.194375    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:38.194387    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:38.219262    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:38.219275    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:38.259802    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:38.259829    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:38.275837    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:38.275846    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:41.035603    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:40.790504    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
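	Two concurrent minikube processes (PIDs 4804 and 4988) write to this log, which is why the timestamps occasionally step backwards; each one alternates a healthz probe against https://10.0.2.15:8443/healthz with a full log-gathering sweep. The "Checking"/"stopped" pairs sit roughly five seconds apart, consistent with a short per-request client timeout. A minimal, illustrative sketch of a probe of that shape (not minikube's actual api_server.go code; the 5s timeout is inferred from the timestamps and the InsecureSkipVerify setting is an assumption):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            // roughly the gap between each "Checking" and "stopped" pair above
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // the guest apiserver's cert is self-signed, so a diagnostic
	                // probe would normally skip verification (assumption)
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for attempt := 1; attempt <= 3; attempt++ {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                // the path this report keeps hitting: the request times out
	                fmt.Printf("stopped: %v\n", err)
	                continue
	            }
	            resp.Body.Close()
	            fmt.Println("healthz:", resp.Status)
	            return
	        }
	    }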
	I0731 15:11:46.037828    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:46.037913    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:46.049161    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:46.049236    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:46.060191    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:46.060260    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:46.071516    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:46.071596    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:46.083088    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:46.083163    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:46.094432    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:46.094508    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:46.105378    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:46.105446    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:46.116010    4804 logs.go:276] 0 containers: []
	W0731 15:11:46.116023    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:46.116087    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:46.127751    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:46.127768    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:46.127774    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:46.133267    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:46.133277    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:46.146048    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:46.146060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:46.160632    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:46.160642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:46.175432    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:46.175441    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:46.188555    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:46.188569    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:46.225717    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:11:46.225730    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:11:46.237091    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:46.237102    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:46.249009    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:46.249019    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:46.274951    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:46.274961    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:46.292011    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:46.292025    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:46.317409    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:46.317419    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:46.330215    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:46.330226    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:46.365613    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:46.365624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:46.384618    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:11:46.384628    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:11:48.897779    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:45.792731    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:45.792862    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:45.807324    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:45.807405    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:45.819882    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:45.819952    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:45.830252    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:45.830317    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:45.841157    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:45.841229    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:45.851993    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:45.852061    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:45.862649    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:45.862716    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:45.873003    4988 logs.go:276] 0 containers: []
	W0731 15:11:45.873015    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:45.873065    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:45.885923    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:45.885940    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:45.885946    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:45.922688    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:45.922697    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:45.944555    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:45.944565    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:45.969607    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:45.969619    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:45.984052    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:45.984066    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:45.988352    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:45.988359    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:46.025700    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:46.025711    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:46.046288    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:46.046305    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:46.059384    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:46.059397    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:46.078528    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:46.078551    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:46.117942    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:46.117960    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:46.131097    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:46.131108    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:46.146738    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:46.146746    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:46.161529    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:46.161538    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:46.173938    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:46.173949    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:46.188664    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:46.188676    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:48.703386    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:53.899951    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:53.900048    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:53.911522    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:11:53.911596    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:53.922872    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:11:53.922950    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:53.939678    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:11:53.939753    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:53.951055    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:11:53.951133    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:53.963131    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:11:53.963209    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:53.705756    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:53.705971    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:53.723631    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:53.723719    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:53.737157    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:53.737230    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:53.750464    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:53.750540    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:53.761581    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:53.761655    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:53.771774    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:53.771843    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:53.784782    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:53.784856    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:53.795238    4988 logs.go:276] 0 containers: []
	W0731 15:11:53.795251    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:53.795307    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:53.805653    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:53.805671    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:53.805677    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:53.822788    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:53.822801    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:53.838190    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:53.838202    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:53.878262    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:53.878279    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:53.893115    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:53.893126    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:53.915357    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:53.915371    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:53.928356    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:53.928372    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:53.949839    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:53.949857    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:53.964763    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:53.964772    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:53.976900    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:53.976909    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:54.001635    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:54.001649    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:54.038921    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:54.038934    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:54.053842    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:54.053855    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:54.067204    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:54.067216    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:54.071988    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:54.072000    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:54.122605    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:54.122615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:53.975270    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:11:53.975340    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:53.985908    4804 logs.go:276] 0 containers: []
	W0731 15:11:53.985919    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:53.985979    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:53.997006    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:11:53.997026    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:53.997032    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:54.034441    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:11:54.034456    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:11:54.056363    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:11:54.056373    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:11:54.069234    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:54.069244    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:54.095687    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:11:54.095704    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:54.110584    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:11:54.110594    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:11:54.122309    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:11:54.122323    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:11:54.137131    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:11:54.137142    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:11:54.149331    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:11:54.149343    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:11:54.164205    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:11:54.164219    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:11:54.175972    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:11:54.175982    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:11:54.193522    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:11:54.193538    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:11:54.208781    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:54.208791    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:54.247511    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:54.247520    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:54.251673    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:11:54.251680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:11:56.769824    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:56.637506    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
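	Every gathering sweep in this log uses the same two-step pattern: list container IDs for a component with docker ps -a --filter=name=k8s_<component> --format={{.ID}} (the logs.go:276 lines), then tail each ID with docker logs --tail 400 (the logs.go:123 lines). A minimal local sketch of that pattern, run directly rather than through minikube's ssh_runner, with an assumed component list and hypothetical function names:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println("listing", c, "failed:", err)
	                continue
	            }
	            fmt.Printf("%d containers: %v\n", len(ids), ids)
	            for _, id := range ids {
	                // mirrors: docker logs --tail 400 <id>
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
	            }
	        }
	    }

	Against the node above, the listing step would print, e.g., "2 containers: [072c2c031eb1 8bb0ebee54c4]" for kube-apiserver, matching the counts the logs.go:276 lines record.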
	I0731 15:12:01.770961    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:01.771046    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:01.782518    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:01.782591    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:01.793694    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:01.793769    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:01.805650    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:01.805725    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:01.816737    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:01.816802    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:01.828595    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:01.828668    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:01.840534    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:01.840603    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:01.853051    4804 logs.go:276] 0 containers: []
	W0731 15:12:01.853063    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:01.853124    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:01.873095    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:01.873112    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:01.873118    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:01.889260    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:01.889272    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:01.901727    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:01.901739    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:01.917275    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:01.917287    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:01.933338    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:01.933346    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:01.946394    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:01.946404    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:01.950937    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:01.950952    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:01.964531    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:01.964540    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:02.001230    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:02.001244    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:02.015507    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:02.015518    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:02.028520    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:02.028534    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:02.043204    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:02.043215    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:02.065225    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:02.065235    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:02.078125    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:02.078137    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:02.103051    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:02.103070    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:01.639819    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:01.640090    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:01.667647    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:01.667776    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:01.685252    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:01.685332    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:01.698718    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:01.698788    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:01.713888    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:01.713960    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:01.729226    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:01.729302    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:01.739637    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:01.739699    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:01.751155    4988 logs.go:276] 0 containers: []
	W0731 15:12:01.751166    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:01.751221    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:01.761382    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:01.761398    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:01.761403    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:01.785769    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:01.785780    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:01.826843    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:01.826858    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:01.841982    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:01.841990    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:01.865742    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:01.865755    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:01.891204    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:01.891214    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:01.903897    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:01.903908    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:01.919307    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:01.919320    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:01.931630    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:01.931642    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:01.944115    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:01.944128    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:01.963324    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:01.963338    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:01.967999    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:01.968011    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:02.011794    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:02.011808    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:02.054048    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:02.054067    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:02.073232    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:02.073245    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:02.095665    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:02.095675    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:04.609651    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:04.641305    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
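	The recurring "container status" step shells out through /bin/bash -c to a fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, which resolves crictl's path if it is installed (otherwise leaving the bare name so that branch fails) and then falls back to docker ps -a. A simplified sketch of the same fallback invoked from Go, run locally and without sudo (not minikube's implementation):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // crictl if available, else docker; same fallback idea as the logged command
	        cmd := `(which crictl >/dev/null && crictl ps -a) || docker ps -a`
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        if err != nil {
	            fmt.Println("both crictl and docker failed:", err)
	        }
	        fmt.Print(string(out))
	    }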
	I0731 15:12:09.612192    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:09.612470    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:09.637474    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:09.637591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:09.643678    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:09.643907    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:09.659346    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:09.659434    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:09.672233    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:09.672314    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:09.684040    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:09.684124    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:09.695892    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:09.695971    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:09.707497    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:09.707575    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:09.719151    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:09.719225    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:09.729989    4804 logs.go:276] 0 containers: []
	W0731 15:12:09.730001    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:09.730070    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:09.741445    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:09.741480    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:09.741489    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:09.767134    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:09.767154    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:09.782486    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:09.782503    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:09.800651    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:09.800663    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:09.813476    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:09.813487    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:09.826183    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:09.826194    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:09.838317    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:09.838330    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:09.853733    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:09.853746    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:09.867336    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:09.867350    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:09.880764    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:09.880776    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:09.894579    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:09.894590    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:09.913019    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:09.913037    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:09.964124    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:09.964139    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:09.969050    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:09.969062    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:09.985629    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:09.985659    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:12.527880    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:09.654083    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:09.654167    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:09.667804    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:09.667888    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:09.680476    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:09.680556    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:09.700853    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:09.700921    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:09.713225    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:09.713303    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:09.731618    4988 logs.go:276] 0 containers: []
	W0731 15:12:09.731627    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:09.731678    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:09.742818    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:09.742834    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:09.742839    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:09.773053    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:09.773065    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:09.785229    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:09.785240    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:09.810410    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:09.810429    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:09.848512    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:09.848525    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:09.863593    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:09.863604    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:09.904337    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:09.904359    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:09.922148    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:09.922161    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:09.937571    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:09.937585    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:09.964607    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:09.964616    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:09.969145    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:09.969154    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:09.981720    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:09.981734    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:09.999232    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:09.999243    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:10.038567    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:10.038576    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:10.053168    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:10.053178    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:10.074226    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:10.074237    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:12.589488    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:17.528983    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:17.529213    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:17.547451    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:17.547546    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:17.561278    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:17.561349    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:17.573826    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:17.573905    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:17.584244    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:17.584310    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:17.594370    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:17.594435    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:17.606020    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:17.606095    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:17.616467    4804 logs.go:276] 0 containers: []
	W0731 15:12:17.616479    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:17.616538    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:17.627839    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:17.627855    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:17.627861    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:17.633171    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:17.633180    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:17.647186    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:17.647198    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:17.660313    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:17.660325    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:17.674029    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:17.674043    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:17.686872    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:17.686883    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:17.705837    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:17.705850    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:17.720690    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:17.720702    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:17.757504    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:17.757523    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:17.772504    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:17.772520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:17.785447    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:17.785459    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:17.822389    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:17.822399    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:17.838321    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:17.838330    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:17.854374    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:17.854385    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:17.880288    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:17.880308    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:17.591748    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:17.591830    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:17.607328    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:17.607376    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:17.619118    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:17.619177    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:17.631091    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:17.631160    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:17.644145    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:17.644223    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:17.655891    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:17.655964    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:17.670134    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:17.670207    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:17.681764    4988 logs.go:276] 0 containers: []
	W0731 15:12:17.681776    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:17.681832    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:17.694890    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:17.694909    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:17.694914    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:17.735209    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:17.735226    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:17.752056    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:17.752067    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:17.767010    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:17.767028    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:17.779606    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:17.779620    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:17.819421    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:17.819437    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:17.837367    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:17.837379    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:17.851986    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:17.851997    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:17.864454    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:17.864467    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:17.888184    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:17.888198    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:17.910894    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:17.910908    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:17.928581    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:17.928596    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:17.945687    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:17.945702    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:17.960095    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:17.960109    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:17.964351    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:17.964357    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:18.001789    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:18.001803    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
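
The cycle that repeats throughout this section is minikube's readiness probe against the guest apiserver: issue GET https://10.0.2.15:8443/healthz with a short client timeout, log "stopped: ... context deadline exceeded" when it fails, gather diagnostics, and retry. A minimal sketch of that loop follows, assuming a plain net/http client with certificate verification disabled; the real implementation in api_server.go is not shown in this log, so treat field names and intervals here as approximations.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 OK or the overall deadline passes. Illustrative sketch only.
    func waitForHealthz(url string, deadline time.Time) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
    		Transport: &http.Transport{
    			// the apiserver certificate is self-signed inside the VM
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		// on timeout the log shows: stopped: <url>: ... context deadline exceeded
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s never reported healthy", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)))
    }
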
	I0731 15:12:20.394899    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:20.518271    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:25.395336    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:25.395586    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:25.416062    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:25.416156    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:25.430815    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:25.430882    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:25.442596    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:25.442672    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:25.453049    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:25.453115    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:25.470517    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:25.470587    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:25.480862    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:25.480928    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:25.491206    4804 logs.go:276] 0 containers: []
	W0731 15:12:25.491217    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:25.491270    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:25.502062    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:25.502078    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:25.502084    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:25.513962    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:25.513973    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:25.538595    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:25.538614    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:25.576636    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:25.576647    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:25.591792    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:25.591807    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:25.607277    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:25.607293    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:25.619795    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:25.619810    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:25.633030    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:25.633041    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:25.670946    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:25.670961    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:25.675842    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:25.675855    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:25.691133    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:25.691144    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:25.709785    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:25.709799    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:25.722593    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:25.722611    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:25.735210    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:25.735223    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:25.747418    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:25.747429    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
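
Between health checks, both processes (PIDs 4804 and 4988 here) re-enumerate the control-plane containers: one `docker ps -a` per component, filtered on the kubeadm/dockershim naming convention k8s_<component>_... and formatted down to bare IDs. The "N containers: [...]" lines above are the parsed result. A sketch of that step is below; the helper name and local execution are assumptions — minikube runs the same command over SSH inside the VM.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the kubeadm convention k8s_<component>_..., returning bare IDs.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		// mirrors the "N containers: [...]" lines in the log
    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
    	}
    }
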
	I0731 15:12:28.260506    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:25.519003    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:25.519082    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:25.531075    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:25.531147    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:25.542421    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:25.542497    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:25.553756    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:25.553825    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:25.565043    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:25.565119    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:25.576169    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:25.576240    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:25.587519    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:25.587597    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:25.598774    4988 logs.go:276] 0 containers: []
	W0731 15:12:25.598786    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:25.598853    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:25.610280    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:25.610299    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:25.610306    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:25.629603    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:25.629620    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:25.652804    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:25.652815    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:25.666662    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:25.666673    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:25.681786    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:25.681801    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:25.686570    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:25.686580    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:25.730982    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:25.730995    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:25.749378    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:25.749387    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:25.761560    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:25.761571    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:25.784871    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:25.784879    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:25.796960    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:25.796974    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:25.810591    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:25.810601    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:25.848785    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:25.848797    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:25.866741    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:25.866752    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:25.880868    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:25.880879    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:25.917239    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:25.917247    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
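
Each "Gathering logs for <component> [<id>]" / "Run: docker logs --tail 400 <id>" pair tails the last 400 lines from one container; components that show two IDs (for example kube-apiserver 072c2c031eb1 and 8bb0ebee54c4) are tailed once per ID, covering both the current and the previously exited instance. A local sketch of that step, again without the SSH transport that ssh_runner.go provides in minikube proper:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs fetches the last n lines of one container's output,
    // the local equivalent of the `docker logs --tail 400 <id>` runs above.
    func tailContainerLogs(id string, n int) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// IDs are the coredns and apiserver containers named in this log
    	for _, id := range []string{"cf18dd58b00d", "072c2c031eb1", "8bb0ebee54c4"} {
    		logs, err := tailContainerLogs(id, 400)
    		if err != nil {
    			fmt.Println(id, "failed:", err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes of logs\n", id, len(logs))
    	}
    }
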
	I0731 15:12:28.430257    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:33.262749    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:33.262939    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:33.279782    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:33.279861    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:33.293575    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:33.293649    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:33.306729    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:33.306801    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:33.317811    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:33.317877    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:33.328567    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:33.328641    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:33.348719    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:33.348792    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:33.359281    4804 logs.go:276] 0 containers: []
	W0731 15:12:33.359292    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:33.359353    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:33.369625    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:33.369643    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:33.369649    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:33.381576    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:33.381587    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:33.396232    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:33.396242    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:33.414323    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:33.414336    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:33.450542    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:33.450555    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:33.455403    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:33.455417    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:33.467707    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:33.467718    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:33.481333    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:33.481344    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:33.496426    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:33.496438    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:33.509175    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:33.509183    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:33.534509    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:33.534519    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:33.546825    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:33.546840    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:33.583283    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:33.583294    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:33.602737    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:33.602749    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:33.616449    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:33.616465    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:33.430886    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:33.430965    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:33.445051    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:33.445119    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:33.456550    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:33.456618    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:33.467680    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:33.467750    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:33.479314    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:33.479392    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:33.496428    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:33.496498    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:33.508855    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:33.508932    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:33.519783    4988 logs.go:276] 0 containers: []
	W0731 15:12:33.519794    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:33.519862    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:33.531020    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:33.531037    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:33.531043    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:33.552918    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:33.552930    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:33.566081    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:33.566094    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:33.612060    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:33.612072    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:33.651969    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:33.651982    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:33.666855    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:33.666867    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:33.677995    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:33.678009    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:33.699492    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:33.699505    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:33.711992    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:33.712008    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:33.725261    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:33.725274    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:33.737150    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:33.737163    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:33.774955    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:33.774965    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:33.796437    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:33.796448    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:33.808146    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:33.808157    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:33.812208    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:33.812216    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:33.833698    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:33.833705    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
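
Host-level logs come from journald and the kernel ring buffer: the kubelet and docker/cri-docker units via `journalctl -u ... -n 400`, and kernel warnings and above via `dmesg ... | tail -n 400`. A sketch under the assumption of a systemd-based guest with passwordless sudo (true for the minikube VM, assumed here for a local run):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // hostLogs collects the same three host-side sources as the log above:
    // kubelet and Docker units from journald, kernel warnings from dmesg.
    func hostLogs() map[string]string {
    	units := map[string][]string{
    		"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
    		"Docker":  {"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
    	}
    	out := map[string]string{}
    	for name, argv := range units {
    		b, _ := exec.Command("sudo", argv...).CombinedOutput()
    		out[name] = string(b)
    	}
    	// dmesg is piped through tail in the log, so run it under a shell
    	b, _ := exec.Command("sh", "-c",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
    	out["dmesg"] = string(b)
    	return out
    }

    func main() {
    	for name, logs := range hostLogs() {
    		fmt.Printf("== %s: %d bytes\n", name, len(logs))
    	}
    }
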
	I0731 15:12:36.131473    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:36.349327    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:41.133625    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:41.133768    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:41.170622    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:41.170702    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:41.192559    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:41.192635    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:41.206978    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:41.207050    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:41.218975    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:41.219048    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:41.229447    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:41.229524    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:41.243946    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:41.244016    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:41.254702    4804 logs.go:276] 0 containers: []
	W0731 15:12:41.254714    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:41.254769    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:41.267637    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:41.267654    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:41.267660    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:41.278979    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:41.278990    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:41.296764    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:41.296775    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:41.308943    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:41.308956    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:41.345349    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:41.345359    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:41.359947    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:41.359962    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:41.373551    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:41.373563    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:41.386729    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:41.386743    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:41.428436    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:41.428452    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:41.443760    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:41.443772    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:41.456634    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:41.456646    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:41.469703    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:41.469716    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:41.489865    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:41.489881    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:41.502437    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:41.502449    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:41.506907    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:41.506915    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:41.351440    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:41.351545    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:41.362961    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:41.363038    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:41.376187    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:41.376264    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:41.388188    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:41.388265    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:41.400233    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:41.400308    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:41.410948    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:41.411019    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:41.421751    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:41.421825    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:41.432348    4988 logs.go:276] 0 containers: []
	W0731 15:12:41.432360    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:41.432424    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:41.444978    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:41.444994    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:41.444999    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:41.449569    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:41.449582    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:41.462131    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:41.462145    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:41.478895    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:41.478906    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:41.519422    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:41.519432    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:41.556761    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:41.556773    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:41.568626    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:41.568638    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:41.580144    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:41.580155    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:41.597753    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:41.597764    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:41.611795    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:41.611805    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:41.634924    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:41.634932    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:41.650796    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:41.650809    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:41.686384    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:41.686394    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:41.700655    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:41.700664    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:41.715314    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:41.715327    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:41.740909    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:41.740921    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
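
The "container status" step is a shell-level fallback chain: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers crictl when it resolves on PATH and otherwise (or when crictl itself fails) degrades to `docker ps -a`. Roughly equivalent Go is sketched below, with exec.LookPath standing in for `which` — an assumption for illustration, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl when present, falling back to docker,
    // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(out)
    }
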
	I0731 15:12:44.255683    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:44.034417    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:49.257871    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:49.257967    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:49.269351    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:49.269430    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:49.280418    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:49.280497    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:49.291520    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:49.291591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:49.303324    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:49.303389    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:49.317895    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:49.317962    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:49.329770    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:49.329839    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:49.341525    4988 logs.go:276] 0 containers: []
	W0731 15:12:49.341537    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:49.341590    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:49.357433    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:49.357449    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:49.357456    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:49.383007    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:49.383019    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:49.398255    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:49.398267    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:49.435648    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:49.435662    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:49.474853    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:49.474865    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:49.496393    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:49.496408    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:49.514566    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:49.514577    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:49.525745    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:49.525757    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:49.539459    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:49.539469    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:49.551463    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:49.551474    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:49.565400    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:49.565411    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:49.588046    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:49.588056    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:49.599505    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:49.599516    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:49.603610    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:49.603615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:49.615312    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:49.615322    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:49.627641    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:49.627653    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:49.036779    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:49.037222    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:49.080264    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:49.080393    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:49.099680    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:49.099776    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:49.114150    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:49.114227    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:49.126177    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:49.126250    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:49.141871    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:49.141948    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:49.153511    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:49.153584    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:49.163974    4804 logs.go:276] 0 containers: []
	W0731 15:12:49.163984    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:49.164040    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:49.177582    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:49.177600    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:49.177606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:49.190072    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:49.190083    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:49.208522    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:49.208533    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:49.245228    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:49.245240    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:49.287059    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:49.287070    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:49.302268    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:49.302280    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:49.315068    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:49.315080    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:49.327119    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:49.327132    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:49.350860    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:49.350876    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:49.377854    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:49.377871    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:49.390911    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:49.390923    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:49.395792    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:49.395803    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:49.416012    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:49.416024    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:49.428610    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:49.428621    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:49.441774    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:49.441788    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
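
The "describe nodes" step seen in each cycle does not use the host's kubectl: it runs the version-pinned binary that minikube installs inside the VM (/var/lib/minikube/binaries/v1.24.1/kubectl in this run) against the node-local kubeconfig, so it is subject to the same apiserver outage the healthz probe is reporting. A hedged sketch; the paths are copied from the log, the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // describeNodes runs the kubectl binary minikube ships inside the VM,
    // pinned to the cluster's Kubernetes version, against the node-local
    // kubeconfig.
    func describeNodes(version string) (string, error) {
    	bin := "/var/lib/minikube/binaries/" + version + "/kubectl"
    	out, err := exec.Command("sudo", bin, "describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := describeNodes("v1.24.1")
    	if err != nil {
    		fmt.Println("describe nodes failed:", err) // stalls too while the apiserver is down
    		return
    	}
    	fmt.Print(out)
    }
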
	I0731 15:12:51.956317    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:52.169020    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:56.958659    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:56.958949    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:56.977179    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:12:56.977266    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:56.990464    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:12:56.990528    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:57.001308    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:12:57.001381    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:57.017306    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:12:57.017375    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:57.031922    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:12:57.031990    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:57.041904    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:12:57.041971    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:57.051953    4804 logs.go:276] 0 containers: []
	W0731 15:12:57.051964    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:57.052014    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:57.062523    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:12:57.062540    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:12:57.062545    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:12:57.081723    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:12:57.081736    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:12:57.093632    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:57.093646    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:57.131504    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:57.131514    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:57.135859    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:12:57.135868    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:12:57.147347    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:12:57.147361    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:12:57.165613    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:12:57.165624    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:12:57.178164    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:12:57.178177    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:57.190860    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:57.190875    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:57.238246    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:12:57.238263    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:12:57.254788    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:12:57.254806    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:12:57.276564    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:12:57.276577    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:12:57.292087    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:12:57.292097    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:12:57.305277    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:57.305288    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:57.330579    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:12:57.330588    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:12:57.171155    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:57.171248    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:57.183091    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:57.183179    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:57.195325    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:57.195404    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:57.210949    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:57.211024    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:57.226844    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:57.226924    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:57.247147    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:57.247230    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:57.266525    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:57.266606    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:57.278205    4988 logs.go:276] 0 containers: []
	W0731 15:12:57.278216    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:57.278295    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:57.290171    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:57.290186    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:57.290191    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:57.312543    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:57.312553    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:57.352324    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:57.352339    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:57.388862    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:57.388875    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:57.426062    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:57.426072    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:57.438570    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:57.438582    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:57.450271    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:57.450281    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:57.462217    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:57.462228    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:57.480030    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:57.480041    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:57.501607    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:57.501616    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:57.513748    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:57.513760    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:57.518353    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:57.518360    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:57.538963    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:57.538976    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:57.553535    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:57.553544    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:57.568246    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:57.568257    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:57.579891    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:57.579901    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:59.844794    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:00.095379    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:04.847149    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:04.847399    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:04.872098    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:04.872197    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:04.888115    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:04.888200    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:04.901322    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:04.901403    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:04.914082    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:04.914154    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:04.924723    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:04.924795    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:04.935589    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:04.935661    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:04.945772    4804 logs.go:276] 0 containers: []
	W0731 15:13:04.945783    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:04.945842    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:04.956563    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:04.956579    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:04.956584    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:04.972682    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:04.972696    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:04.983874    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:04.983885    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:05.007586    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:05.007594    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:05.021851    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:05.021862    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:05.033059    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:05.033068    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:05.050801    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:05.050813    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:05.062581    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:05.062592    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:05.075330    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:05.075342    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:05.090333    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:05.090343    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:05.103800    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:05.103818    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:05.116742    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:05.116752    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:05.137591    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:05.137609    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:05.177041    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:05.177061    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:05.182124    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:05.182133    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:07.726182    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
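
The block above shows minikube's repeating health-wait loop: api_server.go:253 probes https://10.0.2.15:8443/healthz, api_server.go:269 records the timeout, and logs.go then gathers component logs before the next attempt. A minimal Go sketch of that polling idea follows; this is not minikube's actual code, and the URL, timeouts, and TLS handling here are illustrative assumptions only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the overall deadline passes, mirroring the check/stopped pattern above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout ("Client.Timeout exceeded" in the log)
		Transport: &http.Transport{
			// assumption: the bootstrapping apiserver presents a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // back off, then re-check
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Each failed probe corresponds to one "stopped: ... context deadline exceeded" line, after which minikube collects the container logs seen above before retrying.
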
	I0731 15:13:05.097626    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:05.097729    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:05.116679    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:13:05.116757    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:05.148163    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:13:05.148239    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:05.158757    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:13:05.158831    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:05.169762    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:13:05.169834    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:05.181217    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:13:05.181299    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:05.192593    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:13:05.192669    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:05.203713    4988 logs.go:276] 0 containers: []
	W0731 15:13:05.203724    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:05.203789    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:05.216524    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:13:05.216546    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:13:05.216552    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:13:05.239001    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:13:05.239015    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:13:05.253108    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:13:05.253119    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:05.265013    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:05.265024    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:05.301675    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:13:05.301683    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:13:05.338876    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:13:05.338887    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:13:05.353001    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:13:05.353012    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:13:05.367337    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:13:05.367348    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:13:05.390465    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:13:05.390476    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:13:05.401818    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:05.401830    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:05.423541    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:05.423550    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:05.427503    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:05.427513    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:05.465143    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:13:05.465154    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:13:05.483625    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:13:05.483635    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:13:05.498691    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:13:05.498701    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:13:05.513498    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:13:05.513510    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:13:08.030500    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:12.728614    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:12.728876    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:12.754113    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:12.754240    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:12.770542    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:12.770618    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:12.784274    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:12.784346    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:12.795547    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:12.795616    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:12.806451    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:12.806519    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:12.822654    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:12.822719    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:12.832829    4804 logs.go:276] 0 containers: []
	W0731 15:13:12.832842    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:12.832906    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:12.843186    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:12.843204    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:12.843210    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:12.855015    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:12.855025    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:12.869901    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:12.869911    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:12.881706    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:12.881720    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:12.899587    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:12.899597    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:12.924404    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:12.924411    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:12.937903    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:12.937913    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:12.952128    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:12.952141    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:12.963764    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:12.963775    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:12.968982    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:12.968988    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:12.981047    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:12.981059    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:12.998840    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:12.998850    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:13.036885    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:13.036894    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:13.074146    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:13.074162    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:13.086690    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:13.086702    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:13.032744    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:13.032794    4988 kubeadm.go:597] duration metric: took 4m4.059469458s to restartPrimaryControlPlane
	W0731 15:13:13.032897    4988 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 15:13:13.032913    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 15:13:14.062644    4988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.029736792s)
	I0731 15:13:14.062697    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 15:13:14.067899    4988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:13:14.070570    4988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:13:14.073503    4988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 15:13:14.073509    4988 kubeadm.go:157] found existing configuration files:
	
	I0731 15:13:14.073534    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0731 15:13:14.075929    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 15:13:14.075954    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:13:14.078697    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0731 15:13:14.081515    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 15:13:14.081541    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:13:14.083996    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0731 15:13:14.086527    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 15:13:14.086547    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:13:14.089701    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0731 15:13:14.092184    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 15:13:14.092203    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 15:13:14.094872    4988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 15:13:14.110781    4988 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 15:13:14.110822    4988 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 15:13:14.158912    4988 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 15:13:14.158997    4988 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 15:13:14.159060    4988 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 15:13:14.208842    4988 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 15:13:14.216980    4988 out.go:204]   - Generating certificates and keys ...
	I0731 15:13:14.217037    4988 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 15:13:14.217090    4988 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 15:13:14.217165    4988 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 15:13:14.217195    4988 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 15:13:14.217227    4988 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 15:13:14.217288    4988 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 15:13:14.217317    4988 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 15:13:14.217368    4988 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 15:13:14.217411    4988 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 15:13:14.217447    4988 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 15:13:14.217468    4988 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 15:13:14.217495    4988 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 15:13:14.374424    4988 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 15:13:14.479400    4988 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 15:13:14.574679    4988 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 15:13:14.694336    4988 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 15:13:14.727013    4988 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 15:13:14.727442    4988 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 15:13:14.727475    4988 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 15:13:14.796087    4988 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 15:13:15.601448    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:14.799836    4988 out.go:204]   - Booting up control plane ...
	I0731 15:13:14.799885    4988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 15:13:14.799931    4988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 15:13:14.799964    4988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 15:13:14.800004    4988 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 15:13:14.800099    4988 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 15:13:19.301378    4988 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503107 seconds
	I0731 15:13:19.301434    4988 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 15:13:19.304807    4988 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 15:13:19.822373    4988 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 15:13:19.822769    4988 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-609000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 15:13:20.325573    4988 kubeadm.go:310] [bootstrap-token] Using token: 464iif.j93mcmeumustwbfb
	I0731 15:13:20.331600    4988 out.go:204]   - Configuring RBAC rules ...
	I0731 15:13:20.331663    4988 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 15:13:20.331717    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 15:13:20.338308    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 15:13:20.339198    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 15:13:20.340097    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 15:13:20.341031    4988 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 15:13:20.344438    4988 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 15:13:20.518234    4988 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 15:13:20.729786    4988 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 15:13:20.730504    4988 kubeadm.go:310] 
	I0731 15:13:20.730533    4988 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 15:13:20.730538    4988 kubeadm.go:310] 
	I0731 15:13:20.730575    4988 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 15:13:20.730580    4988 kubeadm.go:310] 
	I0731 15:13:20.730592    4988 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 15:13:20.730625    4988 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 15:13:20.730656    4988 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 15:13:20.730659    4988 kubeadm.go:310] 
	I0731 15:13:20.730688    4988 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 15:13:20.730709    4988 kubeadm.go:310] 
	I0731 15:13:20.730733    4988 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 15:13:20.730736    4988 kubeadm.go:310] 
	I0731 15:13:20.730872    4988 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 15:13:20.730984    4988 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 15:13:20.731026    4988 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 15:13:20.731031    4988 kubeadm.go:310] 
	I0731 15:13:20.731137    4988 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 15:13:20.731286    4988 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 15:13:20.731297    4988 kubeadm.go:310] 
	I0731 15:13:20.731409    4988 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 464iif.j93mcmeumustwbfb \
	I0731 15:13:20.731466    4988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a \
	I0731 15:13:20.731479    4988 kubeadm.go:310] 	--control-plane 
	I0731 15:13:20.731482    4988 kubeadm.go:310] 
	I0731 15:13:20.731523    4988 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 15:13:20.731527    4988 kubeadm.go:310] 
	I0731 15:13:20.731583    4988 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 464iif.j93mcmeumustwbfb \
	I0731 15:13:20.731635    4988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a 
	I0731 15:13:20.732006    4988 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 15:13:20.732155    4988 cni.go:84] Creating CNI manager for ""
	I0731 15:13:20.732164    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:13:20.735915    4988 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 15:13:20.740281    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 15:13:20.743421    4988 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
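
ssh_runner.go:362 above reports writing a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show its contents. As a point of reference only, here is a plausible bridge CNI conflist of the kind minikube generates for the docker runtime on k8s v1.24+, embedded in a Go snippet to keep the sketch self-contained; every field value is an assumption rather than the actual file (the 10.244.0.0/16 pod subnet merely matches the 10.244.0.x pod addresses in the coredns logs further down):

package main

import "fmt"

// bridgeConflist is an illustrative bridge CNI configuration; the real
// 1-k8s.conflist written by minikube may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() { fmt.Println(bridgeConflist) }
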
	I0731 15:13:20.749719    4988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 15:13:20.749824    4988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 15:13:20.749852    4988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-609000 minikube.k8s.io/updated_at=2024_07_31T15_13_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=stopped-upgrade-609000 minikube.k8s.io/primary=true
	I0731 15:13:20.804957    4988 ops.go:34] apiserver oom_adj: -16
	I0731 15:13:20.804971    4988 kubeadm.go:1113] duration metric: took 55.23225ms to wait for elevateKubeSystemPrivileges
	I0731 15:13:20.804977    4988 kubeadm.go:394] duration metric: took 4m11.845083292s to StartCluster
	I0731 15:13:20.804987    4988 settings.go:142] acquiring lock: {Name:mk4ba9457258541473c3bcf6c2e4b75027bd146e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:13:20.805080    4988 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:13:20.805484    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:13:20.805702    4988 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:13:20.805733    4988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 15:13:20.805776    4988 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-609000"
	I0731 15:13:20.805791    4988 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-609000"
	W0731 15:13:20.805794    4988 addons.go:243] addon storage-provisioner should already be in state true
	I0731 15:13:20.805807    4988 host.go:66] Checking if "stopped-upgrade-609000" exists ...
	I0731 15:13:20.805786    4988 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-609000"
	I0731 15:13:20.805836    4988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-609000"
	I0731 15:13:20.805940    4988 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:13:20.807133    4988 kapi.go:59] client config for stopped-upgrade-609000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101950700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 15:13:20.807264    4988 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-609000"
	W0731 15:13:20.807270    4988 addons.go:243] addon default-storageclass should already be in state true
	I0731 15:13:20.807278    4988 host.go:66] Checking if "stopped-upgrade-609000" exists ...
	I0731 15:13:20.808979    4988 out.go:177] * Verifying Kubernetes components...
	I0731 15:13:20.809508    4988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 15:13:20.811989    4988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 15:13:20.812005    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:13:20.817893    4988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:13:20.603572    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:20.603698    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:20.615724    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:20.615807    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:20.628063    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:20.628143    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:20.642644    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:20.642726    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:20.654102    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:20.654168    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:20.665587    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:20.665657    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:20.675683    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:20.675746    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:20.686976    4804 logs.go:276] 0 containers: []
	W0731 15:13:20.686987    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:20.687052    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:20.698339    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:20.698354    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:20.698359    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:20.717291    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:20.717302    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:20.728670    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:20.728682    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:20.741088    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:20.741097    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:20.753680    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:20.753692    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:20.793711    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:20.793728    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:20.806817    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:20.806826    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:20.819010    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:20.819021    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:20.831527    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:20.831539    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:20.843779    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:20.843790    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:20.848602    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:20.848613    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:20.885110    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:20.885123    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:20.900272    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:20.900282    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:20.919003    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:20.919017    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:20.934871    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:20.934887    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:23.462001    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:20.823927    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:13:20.827927    4988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:13:20.827938    4988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 15:13:20.827947    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:13:20.899567    4988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:13:20.905890    4988 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:13:20.905959    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:13:20.909169    4988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 15:13:20.913525    4988 api_server.go:72] duration metric: took 107.806833ms to wait for apiserver process to appear ...
	I0731 15:13:20.913537    4988 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:13:20.913546    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:20.939004    4988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:13:28.464089    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:28.464200    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:28.475371    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:28.475453    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:28.486393    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:28.486465    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:28.497691    4804 logs.go:276] 4 containers: [eacaa92db7e0 75305c810552 89c29c2f0f0a 6c66c259b7f1]
	I0731 15:13:28.497759    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:28.508869    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:28.508939    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:28.519861    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:28.519924    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:28.531092    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:28.531166    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:28.542523    4804 logs.go:276] 0 containers: []
	W0731 15:13:28.542533    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:28.542597    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:28.555313    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:28.555330    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:28.555335    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:28.569662    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:28.569674    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:28.580846    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:28.580856    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:28.598679    4804 logs.go:123] Gathering logs for coredns [89c29c2f0f0a] ...
	I0731 15:13:28.598690    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89c29c2f0f0a"
	I0731 15:13:28.610625    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:28.610635    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:28.627085    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:28.627095    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:28.639416    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:28.639427    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:28.644127    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:28.644136    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:28.684366    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:28.684381    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:28.699786    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:28.699800    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:28.711616    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:28.711630    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:28.723310    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:28.723324    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:28.759698    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:28.759706    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:28.771362    4804 logs.go:123] Gathering logs for coredns [6c66c259b7f1] ...
	I0731 15:13:28.771376    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c66c259b7f1"
	I0731 15:13:28.782550    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:28.782561    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:25.915554    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:25.915589    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:31.308891    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:30.915766    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:30.915818    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:36.309604    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:36.309819    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:36.326319    4804 logs.go:276] 1 containers: [2f1fc478ef6c]
	I0731 15:13:36.326407    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:36.338640    4804 logs.go:276] 1 containers: [21fc5079a8db]
	I0731 15:13:36.338718    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:36.350582    4804 logs.go:276] 4 containers: [e1c601a4adb4 57c66d79a419 eacaa92db7e0 75305c810552]
	I0731 15:13:36.350659    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:36.360698    4804 logs.go:276] 1 containers: [fe598b29f2aa]
	I0731 15:13:36.360770    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:36.371109    4804 logs.go:276] 1 containers: [6c2e4e54eafc]
	I0731 15:13:36.371174    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:36.382042    4804 logs.go:276] 1 containers: [e23e28709e4e]
	I0731 15:13:36.382114    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:36.391996    4804 logs.go:276] 0 containers: []
	W0731 15:13:36.392006    4804 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:36.392087    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:36.402944    4804 logs.go:276] 1 containers: [0766f4b0d8f9]
	I0731 15:13:36.402962    4804 logs.go:123] Gathering logs for coredns [75305c810552] ...
	I0731 15:13:36.402967    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75305c810552"
	I0731 15:13:36.414468    4804 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:36.414478    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:36.452158    4804 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:36.452173    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:36.456971    4804 logs.go:123] Gathering logs for container status ...
	I0731 15:13:36.456979    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:36.468368    4804 logs.go:123] Gathering logs for kube-apiserver [2f1fc478ef6c] ...
	I0731 15:13:36.468379    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f1fc478ef6c"
	I0731 15:13:36.482523    4804 logs.go:123] Gathering logs for kube-scheduler [fe598b29f2aa] ...
	I0731 15:13:36.482533    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe598b29f2aa"
	I0731 15:13:36.497473    4804 logs.go:123] Gathering logs for kube-controller-manager [e23e28709e4e] ...
	I0731 15:13:36.497490    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23e28709e4e"
	I0731 15:13:36.516252    4804 logs.go:123] Gathering logs for storage-provisioner [0766f4b0d8f9] ...
	I0731 15:13:36.516263    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0766f4b0d8f9"
	I0731 15:13:36.528496    4804 logs.go:123] Gathering logs for coredns [e1c601a4adb4] ...
	I0731 15:13:36.528507    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1c601a4adb4"
	I0731 15:13:36.539938    4804 logs.go:123] Gathering logs for coredns [eacaa92db7e0] ...
	I0731 15:13:36.539955    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eacaa92db7e0"
	I0731 15:13:36.555728    4804 logs.go:123] Gathering logs for coredns [57c66d79a419] ...
	I0731 15:13:36.555738    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57c66d79a419"
	I0731 15:13:36.569914    4804 logs.go:123] Gathering logs for kube-proxy [6c2e4e54eafc] ...
	I0731 15:13:36.569926    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2e4e54eafc"
	I0731 15:13:36.581163    4804 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:36.581174    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:36.604480    4804 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:36.604488    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:36.640257    4804 logs.go:123] Gathering logs for etcd [21fc5079a8db] ...
	I0731 15:13:36.640268    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21fc5079a8db"
	I0731 15:13:35.916031    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:35.916074    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:39.156293    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:44.156696    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:44.160161    4804 out.go:177] 
	W0731 15:13:44.164156    4804 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 15:13:44.164162    4804 out.go:239] * 
	W0731 15:13:44.164663    4804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:13:44.179083    4804 out.go:177] 
	I0731 15:13:40.916445    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:40.916498    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:45.916987    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:45.917027    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:50.917874    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:50.917919    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 15:13:51.263127    4988 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 15:13:51.265925    4988 out.go:177] * Enabled addons: storage-provisioner
	I0731 15:13:51.276837    4988 addons.go:510] duration metric: took 30.471595833s for enable addons: enabled=[storage-provisioner]
	I0731 15:13:55.918946    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:55.918979    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-31 22:04:58 UTC, ends at Wed 2024-07-31 22:14:00 UTC. --
	Jul 31 22:13:37 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 22:13:42 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 22:13:44 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:44Z" level=error msg="ContainerStats resp: {0x400089b6c0 linux}"
	Jul 31 22:13:44 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:44Z" level=error msg="ContainerStats resp: {0x400089bb40 linux}"
	Jul 31 22:13:45 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:45Z" level=error msg="ContainerStats resp: {0x40004e5d00 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x4000268e00 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x4000268f80 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x4000269780 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x40007c5fc0 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x4000776480 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x4000777080 linux}"
	Jul 31 22:13:46 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:46Z" level=error msg="ContainerStats resp: {0x40007774c0 linux}"
	Jul 31 22:13:47 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 22:13:52 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 22:13:56 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:56Z" level=error msg="ContainerStats resp: {0x40004e5200 linux}"
	Jul 31 22:13:56 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:56Z" level=error msg="ContainerStats resp: {0x40002439c0 linux}"
	Jul 31 22:13:57 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:57Z" level=error msg="ContainerStats resp: {0x40008f64c0 linux}"
	Jul 31 22:13:57 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x40008f7a00 linux}"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x4000268600 linux}"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x4000268f40 linux}"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x4000269400 linux}"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x40007c5140 linux}"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x40007c5540 linux}"
	Jul 31 22:13:58 running-upgrade-683000 cri-dockerd[2674]: time="2024-07-31T22:13:58Z" level=error msg="ContainerStats resp: {0x40007764c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e1c601a4adb44       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   f6bef9bffe948
	57c66d79a419b       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   2ad3afa774014
	eacaa92db7e0d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   2ad3afa774014
	75305c810552d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f6bef9bffe948
	6c2e4e54eafce       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   5a5fb86ce5443
	0766f4b0d8f9c       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   36016a9cf6f91
	2f1fc478ef6c8       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   672541d6bff2c
	21fc5079a8db1       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c486af2fd563f
	e23e28709e4e6       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   1547d5420bb21
	fe598b29f2aa7       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e724c0e6fa277
	
	
	==> coredns [57c66d79a419] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:39529->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:52658->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:45831->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:54459->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:47737->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:58447->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5204463914347516733.1788340987626305661. HINFO: read udp 10.244.0.2:56998->10.0.2.3:53: i/o timeout
	
	
	==> coredns [75305c810552] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:48657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:33837->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:41767->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:47324->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:60630->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:36118->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:55182->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:50301->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:53994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2076017628597663021.9126477691025918657. HINFO: read udp 10.244.0.3:35711->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e1c601a4adb4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:40186->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:57031->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:37486->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:34977->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:55914->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:43835->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5683914000473910060.2221240755092806295. HINFO: read udp 10.244.0.3:56265->10.0.2.3:53: i/o timeout
	
	
	==> coredns [eacaa92db7e0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:36397->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:52775->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:44864->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:44262->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:39466->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:43186->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:53965->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:56752->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:51743->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 332605979926625080.7535456651718991115. HINFO: read udp 10.244.0.2:48422->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-683000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-683000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=running-upgrade-683000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T15_09_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:09:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-683000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:13:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:09:43 +0000   Wed, 31 Jul 2024 22:09:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:09:43 +0000   Wed, 31 Jul 2024 22:09:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:09:43 +0000   Wed, 31 Jul 2024 22:09:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:09:43 +0000   Wed, 31 Jul 2024 22:09:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-683000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 fba6b5bf8f734145b1a553ff4195cf52
	  System UUID:                fba6b5bf8f734145b1a553ff4195cf52
	  Boot ID:                    5918cdc1-1b82-47ad-b6b4-5ada4238f0af
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-jjmnv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-qgthj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-683000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-683000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-683000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-lrj9k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-683000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-683000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-683000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-683000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-683000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-683000 event: Registered Node running-upgrade-683000 in Controller
	
	
	==> dmesg <==
	[  +1.327100] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.076906] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.086794] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +0.185156] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.081241] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.294305] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[  +0.313298] kauditd_printk_skb: 92 callbacks suppressed
	[  +8.830542] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
	[  +2.457442] systemd-fstab-generator[2195]: Ignoring "noauto" for root device
	[  +0.141738] systemd-fstab-generator[2229]: Ignoring "noauto" for root device
	[  +0.099647] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.101555] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +1.579225] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.113787] systemd-fstab-generator[2631]: Ignoring "noauto" for root device
	[  +0.067330] systemd-fstab-generator[2642]: Ignoring "noauto" for root device
	[  +0.083102] systemd-fstab-generator[2653]: Ignoring "noauto" for root device
	[  +0.091951] systemd-fstab-generator[2667]: Ignoring "noauto" for root device
	[  +2.315876] systemd-fstab-generator[2822]: Ignoring "noauto" for root device
	[  +2.103094] systemd-fstab-generator[3249]: Ignoring "noauto" for root device
	[  +1.414096] systemd-fstab-generator[3586]: Ignoring "noauto" for root device
	[ +19.297762] kauditd_printk_skb: 68 callbacks suppressed
	[Jul31 22:09] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.306325] systemd-fstab-generator[10888]: Ignoring "noauto" for root device
	[  +5.643644] systemd-fstab-generator[11517]: Ignoring "noauto" for root device
	[  +0.469134] systemd-fstab-generator[11652]: Ignoring "noauto" for root device
	
	
	==> etcd [21fc5079a8db] <==
	{"level":"info","ts":"2024-07-31T22:09:38.291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-31T22:09:38.293Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-31T22:09:38.292Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T22:09:38.293Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T22:09:38.294Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T22:09:38.294Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T22:09:38.294Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T22:09:39.077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T22:09:39.078Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-683000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T22:09:39.078Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T22:09:39.078Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T22:09:39.078Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:09:39.078Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T22:09:39.082Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-31T22:09:39.082Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T22:09:39.082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T22:09:39.083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:09:39.083Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T22:09:39.083Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:14:00 up 9 min,  0 users,  load average: 0.54, 0.37, 0.18
	Linux running-upgrade-683000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2f1fc478ef6c] <==
	I0731 22:09:40.300218       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 22:09:40.301871       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 22:09:40.301896       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 22:09:40.304879       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 22:09:40.326457       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 22:09:40.326464       1 cache.go:39] Caches are synced for autoregister controller
	I0731 22:09:40.341411       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 22:09:41.038733       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 22:09:41.208437       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 22:09:41.211068       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 22:09:41.211083       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 22:09:41.326622       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 22:09:41.336302       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 22:09:41.384250       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 22:09:41.386185       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0731 22:09:41.386600       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 22:09:41.388227       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 22:09:42.357632       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 22:09:43.010132       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 22:09:43.013966       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 22:09:43.032155       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 22:09:43.071621       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 22:09:56.012815       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0731 22:09:56.062353       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0731 22:09:56.616249       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e23e28709e4e] <==
	I0731 22:09:55.189301       1 shared_informer.go:262] Caches are synced for service account
	I0731 22:09:55.191465       1 shared_informer.go:262] Caches are synced for job
	I0731 22:09:55.207648       1 shared_informer.go:262] Caches are synced for TTL
	I0731 22:09:55.207681       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0731 22:09:55.207721       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0731 22:09:55.207730       1 shared_informer.go:262] Caches are synced for PVC protection
	I0731 22:09:55.208835       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0731 22:09:55.211877       1 shared_informer.go:262] Caches are synced for namespace
	I0731 22:09:55.215873       1 shared_informer.go:262] Caches are synced for deployment
	I0731 22:09:55.260786       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 22:09:55.274236       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 22:09:55.363056       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 22:09:55.366494       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 22:09:55.367597       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 22:09:55.367611       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 22:09:55.367641       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 22:09:55.378274       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0731 22:09:55.412509       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 22:09:55.827830       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 22:09:55.857282       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 22:09:55.857293       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 22:09:56.013979       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0731 22:09:56.065432       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lrj9k"
	I0731 22:09:56.217938       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qgthj"
	I0731 22:09:56.226807       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-jjmnv"
	
	
	==> kube-proxy [6c2e4e54eafc] <==
	I0731 22:09:56.564485       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0731 22:09:56.564574       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0731 22:09:56.564629       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 22:09:56.607563       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 22:09:56.607575       1 server_others.go:206] "Using iptables Proxier"
	I0731 22:09:56.607902       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 22:09:56.609526       1 server.go:661] "Version info" version="v1.24.1"
	I0731 22:09:56.609591       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:09:56.610169       1 config.go:317] "Starting service config controller"
	I0731 22:09:56.610255       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 22:09:56.610398       1 config.go:226] "Starting endpoint slice config controller"
	I0731 22:09:56.610419       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 22:09:56.613771       1 config.go:444] "Starting node config controller"
	I0731 22:09:56.614068       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 22:09:56.711016       1 shared_informer.go:262] Caches are synced for service config
	I0731 22:09:56.711053       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 22:09:56.714415       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [fe598b29f2aa] <==
	W0731 22:09:40.275332       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 22:09:40.275351       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 22:09:40.275481       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:09:40.275501       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:09:40.276092       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:09:40.276118       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 22:09:40.276166       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:09:40.276195       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:09:40.276229       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:09:40.276258       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:09:40.276293       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:09:40.276309       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:09:40.276370       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:09:40.276399       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:09:40.276444       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:09:40.276461       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 22:09:40.276490       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 22:09:40.276524       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 22:09:40.276555       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:09:40.276571       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:09:41.094970       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 22:09:41.095009       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 22:09:41.174399       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 22:09:41.174468       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0731 22:09:41.459433       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-31 22:04:58 UTC, ends at Wed 2024-07-31 22:14:00 UTC. --
	Jul 31 22:09:44 running-upgrade-683000 kubelet[11523]: I0731 22:09:44.478141   11523 reconciler.go:157] "Reconciler: start to sync state"
	Jul 31 22:09:44 running-upgrade-683000 kubelet[11523]: E0731 22:09:44.643319   11523 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-683000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-683000"
	Jul 31 22:09:44 running-upgrade-683000 kubelet[11523]: E0731 22:09:44.845337   11523 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-683000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-683000"
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: I0731 22:09:55.177886   11523 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: I0731 22:09:55.266847   11523 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: I0731 22:09:55.266993   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d228a588-2ab2-4db5-9750-c77df6ef669c-tmp\") pod \"storage-provisioner\" (UID: \"d228a588-2ab2-4db5-9750-c77df6ef669c\") " pod="kube-system/storage-provisioner"
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: I0731 22:09:55.267009   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jct\" (UniqueName: \"kubernetes.io/projected/d228a588-2ab2-4db5-9750-c77df6ef669c-kube-api-access-j4jct\") pod \"storage-provisioner\" (UID: \"d228a588-2ab2-4db5-9750-c77df6ef669c\") " pod="kube-system/storage-provisioner"
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: I0731 22:09:55.267220   11523 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: E0731 22:09:55.370813   11523 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: E0731 22:09:55.370832   11523 projected.go:192] Error preparing data for projected volume kube-api-access-j4jct for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 31 22:09:55 running-upgrade-683000 kubelet[11523]: E0731 22:09:55.370864   11523 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d228a588-2ab2-4db5-9750-c77df6ef669c-kube-api-access-j4jct podName:d228a588-2ab2-4db5-9750-c77df6ef669c nodeName:}" failed. No retries permitted until 2024-07-31 22:09:55.87085187 +0000 UTC m=+12.874792934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j4jct" (UniqueName: "kubernetes.io/projected/d228a588-2ab2-4db5-9750-c77df6ef669c-kube-api-access-j4jct") pod "storage-provisioner" (UID: "d228a588-2ab2-4db5-9750-c77df6ef669c") : configmap "kube-root-ca.crt" not found
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.067552   11523 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.175424   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a5ff811-d7df-4528-8efb-e26348ebbd4f-kube-proxy\") pod \"kube-proxy-lrj9k\" (UID: \"0a5ff811-d7df-4528-8efb-e26348ebbd4f\") " pod="kube-system/kube-proxy-lrj9k"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.175573   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a5ff811-d7df-4528-8efb-e26348ebbd4f-xtables-lock\") pod \"kube-proxy-lrj9k\" (UID: \"0a5ff811-d7df-4528-8efb-e26348ebbd4f\") " pod="kube-system/kube-proxy-lrj9k"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.175584   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a5ff811-d7df-4528-8efb-e26348ebbd4f-lib-modules\") pod \"kube-proxy-lrj9k\" (UID: \"0a5ff811-d7df-4528-8efb-e26348ebbd4f\") " pod="kube-system/kube-proxy-lrj9k"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.175595   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l64p\" (UniqueName: \"kubernetes.io/projected/0a5ff811-d7df-4528-8efb-e26348ebbd4f-kube-api-access-7l64p\") pod \"kube-proxy-lrj9k\" (UID: \"0a5ff811-d7df-4528-8efb-e26348ebbd4f\") " pod="kube-system/kube-proxy-lrj9k"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.211562   11523 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="36016a9cf6f91e3904c49d7993e786bf400bfc18f8a4cddbf95b8358cb8a146f"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.225669   11523 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.246252   11523 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.377281   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c39e6caf-b5c9-4c45-90c0-e7ddc404dc90-config-volume\") pod \"coredns-6d4b75cb6d-qgthj\" (UID: \"c39e6caf-b5c9-4c45-90c0-e7ddc404dc90\") " pod="kube-system/coredns-6d4b75cb6d-qgthj"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.377329   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9091adc8-b2e1-43fd-94d4-5c11d061d68d-config-volume\") pod \"coredns-6d4b75cb6d-jjmnv\" (UID: \"9091adc8-b2e1-43fd-94d4-5c11d061d68d\") " pod="kube-system/coredns-6d4b75cb6d-jjmnv"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.377347   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94krz\" (UniqueName: \"kubernetes.io/projected/c39e6caf-b5c9-4c45-90c0-e7ddc404dc90-kube-api-access-94krz\") pod \"coredns-6d4b75cb6d-qgthj\" (UID: \"c39e6caf-b5c9-4c45-90c0-e7ddc404dc90\") " pod="kube-system/coredns-6d4b75cb6d-qgthj"
	Jul 31 22:09:56 running-upgrade-683000 kubelet[11523]: I0731 22:09:56.377367   11523 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjvnv\" (UniqueName: \"kubernetes.io/projected/9091adc8-b2e1-43fd-94d4-5c11d061d68d-kube-api-access-zjvnv\") pod \"coredns-6d4b75cb6d-jjmnv\" (UID: \"9091adc8-b2e1-43fd-94d4-5c11d061d68d\") " pod="kube-system/coredns-6d4b75cb6d-jjmnv"
	Jul 31 22:13:34 running-upgrade-683000 kubelet[11523]: I0731 22:13:34.419013   11523 scope.go:110] "RemoveContainer" containerID="6c66c259b7f1491221674789a7352df90f87dfcb9487fced3dea491ddb861ef9"
	Jul 31 22:13:34 running-upgrade-683000 kubelet[11523]: I0731 22:13:34.445368   11523 scope.go:110] "RemoveContainer" containerID="89c29c2f0f0a3609537d18eed9b52acd5543830c81a881071ae4366ea8185e07"
	
	
	==> storage-provisioner [0766f4b0d8f9] <==
	I0731 22:09:56.310149       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 22:09:56.315596       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 22:09:56.315655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 22:09:56.319124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 22:09:56.319290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-683000_96e2cb5c-6d8a-4565-9e0b-9d2286ec61f9!
	I0731 22:09:56.319905       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4dfe50b7-dd21-45c0-a0ff-b613ac062749", APIVersion:"v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-683000_96e2cb5c-6d8a-4565-9e0b-9d2286ec61f9 became leader
	I0731 22:09:56.420003       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-683000_96e2cb5c-6d8a-4565-9e0b-9d2286ec61f9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-683000 -n running-upgrade-683000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-683000 -n running-upgrade-683000: exit status 2 (15.664846375s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-683000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-683000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-683000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-683000: (1.143352083s)
--- FAIL: TestRunningBinaryUpgrade (600.65s)
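
Analysis: the upgraded cluster itself appears healthy in the logs above (all control-plane containers Running, node Ready), but the post-upgrade status probe reports the apiserver as Stopped, and both generations of CoreDNS pods log repeated HINFO i/o timeouts against the QEMU user-network resolver at 10.0.2.3:53. The triage sketch below is an assumption-laden aid for a local reproduction, not part of the harness; the profile name is taken from this run.

	# Re-run the status probe the harness used, then look inside the VM.
	out/minikube-darwin-arm64 status -p running-upgrade-683000
	# Did the apiserver container exit after the upgrade, or is it merely slow?
	out/minikube-darwin-arm64 ssh -p running-upgrade-683000 -- "docker ps -a --filter name=kube-apiserver --format '{{.Names}}: {{.Status}}'"
	# Probe the user-net DNS server that CoreDNS keeps timing out on.
	out/minikube-darwin-arm64 ssh -p running-upgrade-683000 -- "nslookup kubernetes.io 10.0.2.3"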

                                                
                                    
TestKubernetesUpgrade (17.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
E0731 15:07:25.961927    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.832918375s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-410000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-410000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
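
Analysis: both VM creation attempts fail at the same step: QEMU is launched through socket_vmnet_client, which gets "Connection refused" on the daemon socket /var/run/socket_vmnet, so the machine never acquires a network backend and start exits with status 80 (the full trace follows in the stderr block below). A host-side check, sketched on the assumption that the socket_vmnet daemon lives next to the client binary shown in the config:

	# Is the daemon running, and does the socket it should serve exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it manually (binary path and gateway address are assumptions for this setup).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet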
** stderr ** 
	I0731 15:07:17.014374    4897 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:07:17.014496    4897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:07:17.014498    4897 out.go:304] Setting ErrFile to fd 2...
	I0731 15:07:17.014508    4897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:07:17.014685    4897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:07:17.015848    4897 out.go:298] Setting JSON to false
	I0731 15:07:17.032521    4897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4001,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:07:17.032592    4897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:07:17.037609    4897 out.go:177] * [kubernetes-upgrade-410000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:07:17.045538    4897 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:07:17.045611    4897 notify.go:220] Checking for updates...
	I0731 15:07:17.051429    4897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:07:17.054495    4897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:07:17.055773    4897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:07:17.058455    4897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:07:17.061462    4897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:07:17.064874    4897 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:07:17.064939    4897 config.go:182] Loaded profile config "running-upgrade-683000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:07:17.064981    4897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:07:17.069452    4897 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:07:17.076486    4897 start.go:297] selected driver: qemu2
	I0731 15:07:17.076495    4897 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:07:17.076502    4897 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:07:17.078703    4897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:07:17.081454    4897 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:07:17.084600    4897 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 15:07:17.084641    4897 cni.go:84] Creating CNI manager for ""
	I0731 15:07:17.084651    4897 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 15:07:17.084687    4897 start.go:340] cluster config:
	{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:07:17.088246    4897 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:07:17.095446    4897 out.go:177] * Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	I0731 15:07:17.099445    4897 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 15:07:17.099459    4897 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 15:07:17.099471    4897 cache.go:56] Caching tarball of preloaded images
	I0731 15:07:17.099523    4897 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:07:17.099529    4897 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 15:07:17.099585    4897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kubernetes-upgrade-410000/config.json ...
	I0731 15:07:17.099598    4897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kubernetes-upgrade-410000/config.json: {Name:mk452944e007db88b710eff1b8377f3b8dadb9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:07:17.099819    4897 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:07:17.099852    4897 start.go:364] duration metric: took 25.959µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I0731 15:07:17.099864    4897 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:07:17.099905    4897 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:07:17.103602    4897 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:07:17.119475    4897 start.go:159] libmachine.API.Create for "kubernetes-upgrade-410000" (driver="qemu2")
	I0731 15:07:17.119504    4897 client.go:168] LocalClient.Create starting
	I0731 15:07:17.119568    4897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:07:17.119599    4897 main.go:141] libmachine: Decoding PEM data...
	I0731 15:07:17.119609    4897 main.go:141] libmachine: Parsing certificate...
	I0731 15:07:17.119658    4897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:07:17.119681    4897 main.go:141] libmachine: Decoding PEM data...
	I0731 15:07:17.119688    4897 main.go:141] libmachine: Parsing certificate...
	I0731 15:07:17.120106    4897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:07:17.272909    4897 main.go:141] libmachine: Creating SSH key...
	I0731 15:07:17.342758    4897 main.go:141] libmachine: Creating Disk image...
	I0731 15:07:17.342765    4897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:07:17.342969    4897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:17.352278    4897 main.go:141] libmachine: STDOUT: 
	I0731 15:07:17.352301    4897 main.go:141] libmachine: STDERR: 
	I0731 15:07:17.352355    4897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2 +20000M
	I0731 15:07:17.360406    4897 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:07:17.360432    4897 main.go:141] libmachine: STDERR: 
	I0731 15:07:17.360447    4897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:17.360454    4897 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:07:17.360467    4897 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:07:17.360491    4897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:00:7c:8e:57:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:17.362200    4897 main.go:141] libmachine: STDOUT: 
	I0731 15:07:17.362215    4897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:07:17.362233    4897 client.go:171] duration metric: took 242.72675ms to LocalClient.Create
	I0731 15:07:19.364414    4897 start.go:128] duration metric: took 2.264507s to createHost
	I0731 15:07:19.364517    4897 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 2.264689458s
	W0731 15:07:19.364619    4897 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:07:19.376780    4897 out.go:177] * Deleting "kubernetes-upgrade-410000" in qemu2 ...
	W0731 15:07:19.406403    4897 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:07:19.406583    4897 start.go:729] Will try again in 5 seconds ...
	I0731 15:07:24.408785    4897 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:07:24.409430    4897 start.go:364] duration metric: took 494.417µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I0731 15:07:24.409505    4897 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:07:24.409784    4897 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:07:24.418427    4897 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:07:24.465784    4897 start.go:159] libmachine.API.Create for "kubernetes-upgrade-410000" (driver="qemu2")
	I0731 15:07:24.465829    4897 client.go:168] LocalClient.Create starting
	I0731 15:07:24.465950    4897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:07:24.466021    4897 main.go:141] libmachine: Decoding PEM data...
	I0731 15:07:24.466038    4897 main.go:141] libmachine: Parsing certificate...
	I0731 15:07:24.466094    4897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:07:24.466139    4897 main.go:141] libmachine: Decoding PEM data...
	I0731 15:07:24.466150    4897 main.go:141] libmachine: Parsing certificate...
	I0731 15:07:24.466716    4897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:07:24.625419    4897 main.go:141] libmachine: Creating SSH key...
	I0731 15:07:24.750834    4897 main.go:141] libmachine: Creating Disk image...
	I0731 15:07:24.750842    4897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:07:24.751038    4897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:24.760749    4897 main.go:141] libmachine: STDOUT: 
	I0731 15:07:24.760764    4897 main.go:141] libmachine: STDERR: 
	I0731 15:07:24.760824    4897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2 +20000M
	I0731 15:07:24.768679    4897 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:07:24.768705    4897 main.go:141] libmachine: STDERR: 
	I0731 15:07:24.768716    4897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:24.768721    4897 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:07:24.768729    4897 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:07:24.768761    4897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a6:8f:ba:58:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:24.770444    4897 main.go:141] libmachine: STDOUT: 
	I0731 15:07:24.770457    4897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:07:24.770474    4897 client.go:171] duration metric: took 304.645417ms to LocalClient.Create
	I0731 15:07:26.772652    4897 start.go:128] duration metric: took 2.36286925s to createHost
	I0731 15:07:26.772755    4897 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 2.363336625s
	W0731 15:07:26.773131    4897 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:07:26.786784    4897 out.go:177] 
	W0731 15:07:26.789765    4897 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:07:26.789788    4897 out.go:239] * 
	* 
	W0731 15:07:26.792521    4897 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:07:26.803728    4897 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-410000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-410000: (1.89156475s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-410000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-410000 status --format={{.Host}}: exit status 7 (53.113375ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179106375s)

-- stdout --
	* [kubernetes-upgrade-410000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:07:28.795005    4930 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:07:28.795173    4930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:07:28.795178    4930 out.go:304] Setting ErrFile to fd 2...
	I0731 15:07:28.795181    4930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:07:28.795318    4930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:07:28.796340    4930 out.go:298] Setting JSON to false
	I0731 15:07:28.812782    4930 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4012,"bootTime":1722459636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:07:28.812854    4930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:07:28.816571    4930 out.go:177] * [kubernetes-upgrade-410000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:07:28.822575    4930 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:07:28.822660    4930 notify.go:220] Checking for updates...
	I0731 15:07:28.829550    4930 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:07:28.832571    4930 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:07:28.835541    4930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:07:28.838551    4930 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:07:28.841597    4930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:07:28.843076    4930 config.go:182] Loaded profile config "kubernetes-upgrade-410000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 15:07:28.843311    4930 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:07:28.847510    4930 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:07:28.854394    4930 start.go:297] selected driver: qemu2
	I0731 15:07:28.854401    4930 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:07:28.854462    4930 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:07:28.856885    4930 cni.go:84] Creating CNI manager for ""
	I0731 15:07:28.856900    4930 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:07:28.856927    4930 start.go:340] cluster config:
	{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:07:28.860422    4930 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:07:28.868554    4930 out.go:177] * Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	I0731 15:07:28.872522    4930 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 15:07:28.872537    4930 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 15:07:28.872548    4930 cache.go:56] Caching tarball of preloaded images
	I0731 15:07:28.872604    4930 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:07:28.872611    4930 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 15:07:28.872671    4930 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kubernetes-upgrade-410000/config.json ...
	I0731 15:07:28.873120    4930 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:07:28.873156    4930 start.go:364] duration metric: took 29.209µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I0731 15:07:28.873166    4930 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:07:28.873171    4930 fix.go:54] fixHost starting: 
	I0731 15:07:28.873290    4930 fix.go:112] recreateIfNeeded on kubernetes-upgrade-410000: state=Stopped err=<nil>
	W0731 15:07:28.873299    4930 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:07:28.881522    4930 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	I0731 15:07:28.885543    4930 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:07:28.885583    4930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a6:8f:ba:58:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:28.887791    4930 main.go:141] libmachine: STDOUT: 
	I0731 15:07:28.887814    4930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:07:28.887846    4930 fix.go:56] duration metric: took 14.676ms for fixHost
	I0731 15:07:28.887851    4930 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 14.690416ms
	W0731 15:07:28.887860    4930 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:07:28.887892    4930 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:07:28.887897    4930 start.go:729] Will try again in 5 seconds ...
	I0731 15:07:33.889997    4930 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:07:33.890545    4930 start.go:364] duration metric: took 381.916µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I0731 15:07:33.890702    4930 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:07:33.890720    4930 fix.go:54] fixHost starting: 
	I0731 15:07:33.891276    4930 fix.go:112] recreateIfNeeded on kubernetes-upgrade-410000: state=Stopped err=<nil>
	W0731 15:07:33.891297    4930 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:07:33.899722    4930 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	I0731 15:07:33.903754    4930 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:07:33.904005    4930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a6:8f:ba:58:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I0731 15:07:33.911204    4930 main.go:141] libmachine: STDOUT: 
	I0731 15:07:33.911245    4930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:07:33.911323    4930 fix.go:56] duration metric: took 20.605041ms for fixHost
	I0731 15:07:33.911338    4930 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 20.763625ms
	W0731 15:07:33.911482    4930 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:07:33.918776    4930 out.go:177] 
	W0731 15:07:33.922768    4930 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:07:33.922793    4930 out.go:239] * 
	* 
	W0731 15:07:33.923975    4930 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:07:33.934753    4930 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-410000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-410000 version --output=json: exit status 1 (45.397834ms)

** stderr ** 
	error: context "kubernetes-upgrade-410000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-31 15:07:33.991103 -0700 PDT m=+2485.432739584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-410000 -n kubernetes-upgrade-410000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-410000 -n kubernetes-upgrade-410000: exit status 7 (30.415042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-410000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-410000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-410000
--- FAIL: TestKubernetesUpgrade (17.11s)
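Every start attempt in this test fails on the same precondition: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and provisioning aborts before any Kubernetes version logic runs. A minimal, hypothetical Go sketch (not part of the test suite; the socket path is the SocketVMnetPath value from the config dumps above) that reproduces just this readiness probe:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects to before
		// launching qemu-system-aarch64. A "connection refused" here matches
		// the failure logged above and means the socket_vmnet daemon is not
		// running (or is listening on a different path).
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the build agent, every qemu2 test in this report that starts with Network:socket_vmnet can be expected to fail with the same GUEST_PROVISION error.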

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.78s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current65164559/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.78s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.2s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1257599418/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.20s)
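Both upgrade subtests exit before any driver-upgrade logic is exercised: minikube refuses the hyperkit driver outright because hyperkit is an Intel-only macOS hypervisor and this agent is darwin/arm64. A hypothetical sketch of such a platform guard (illustrative names only, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"runtime"
	)

	// hyperkitSupported mirrors the kind of check behind the
	// DRV_UNSUPPORTED_OS exit above: hyperkit exists only for Intel Macs.
	func hyperkitSupported() bool {
		return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
	}

	func main() {
		if !hyperkitSupported() {
			fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
		}
	}

On an arm64 agent the guard always fires, which is why both subtests end with exit status 56 instead of reaching the skip-upgrade behavior they were written to cover.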

TestStoppedBinaryUpgrade/Upgrade (587.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2151075956 start -p stopped-upgrade-609000 --memory=2200 --vm-driver=qemu2 
E0731 15:08:21.367386    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2151075956 start -p stopped-upgrade-609000 --memory=2200 --vm-driver=qemu2 : (52.422555291s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2151075956 -p stopped-upgrade-609000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2151075956 -p stopped-upgrade-609000 stop: (12.121059042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-609000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0731 15:10:18.288308    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 15:12:25.956691    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-609000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.448769292s)

-- stdout --
	* [stopped-upgrade-609000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-609000" primary control-plane node in "stopped-upgrade-609000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-609000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 15:08:39.650157    4988 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:08:39.650340    4988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:08:39.650344    4988 out.go:304] Setting ErrFile to fd 2...
	I0731 15:08:39.650347    4988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:08:39.650507    4988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:08:39.651620    4988 out.go:298] Setting JSON to false
	I0731 15:08:39.670714    4988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4083,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:08:39.670793    4988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:08:39.675607    4988 out.go:177] * [stopped-upgrade-609000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:08:39.681569    4988 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:08:39.681627    4988 notify.go:220] Checking for updates...
	I0731 15:08:39.687466    4988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:08:39.690519    4988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:08:39.691770    4988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:08:39.694567    4988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:08:39.697556    4988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:08:39.700787    4988 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:08:39.703419    4988 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 15:08:39.706519    4988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:08:39.710504    4988 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:08:39.717551    4988 start.go:297] selected driver: qemu2
	I0731 15:08:39.717559    4988 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:08:39.717641    4988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:08:39.720408    4988 cni.go:84] Creating CNI manager for ""
	I0731 15:08:39.720422    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:08:39.720442    4988 start.go:340] cluster config:
	{Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:08:39.720494    4988 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:08:39.728476    4988 out.go:177] * Starting "stopped-upgrade-609000" primary control-plane node in "stopped-upgrade-609000" cluster
	I0731 15:08:39.732528    4988 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 15:08:39.732544    4988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 15:08:39.732555    4988 cache.go:56] Caching tarball of preloaded images
	I0731 15:08:39.732606    4988 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:08:39.732613    4988 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 15:08:39.732668    4988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/config.json ...
	I0731 15:08:39.733060    4988 start.go:360] acquireMachinesLock for stopped-upgrade-609000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:08:39.733092    4988 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "stopped-upgrade-609000"
	I0731 15:08:39.733101    4988 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:08:39.733105    4988 fix.go:54] fixHost starting: 
	I0731 15:08:39.733204    4988 fix.go:112] recreateIfNeeded on stopped-upgrade-609000: state=Stopped err=<nil>
	W0731 15:08:39.733214    4988 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:08:39.741531    4988 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-609000" ...
	I0731 15:08:39.745445    4988 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:08:39.745503    4988 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376,hostname=stopped-upgrade-609000 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/disk.qcow2
	I0731 15:08:39.790435    4988 main.go:141] libmachine: STDOUT: 
	I0731 15:08:39.790464    4988 main.go:141] libmachine: STDERR: 
	I0731 15:08:39.790472    4988 main.go:141] libmachine: Waiting for VM to start (ssh -p 50463 docker@127.0.0.1)...
	I0731 15:09:00.178062    4988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/config.json ...
	I0731 15:09:00.178895    4988 machine.go:94] provisionDockerMachine start ...
	I0731 15:09:00.179074    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.179617    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.179633    4988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 15:09:00.272466    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 15:09:00.272494    4988 buildroot.go:166] provisioning hostname "stopped-upgrade-609000"
	I0731 15:09:00.272605    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.272814    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.272826    4988 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-609000 && echo "stopped-upgrade-609000" | sudo tee /etc/hostname
	I0731 15:09:00.354491    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-609000
	
	I0731 15:09:00.354591    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.354758    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.354769    4988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-609000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-609000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-609000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 15:09:00.429297    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 15:09:00.429311    4988 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19312-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19312-1411/.minikube}
	I0731 15:09:00.429319    4988 buildroot.go:174] setting up certificates
	I0731 15:09:00.429326    4988 provision.go:84] configureAuth start
	I0731 15:09:00.429330    4988 provision.go:143] copyHostCerts
	I0731 15:09:00.429403    4988 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem, removing ...
	I0731 15:09:00.429413    4988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem
	I0731 15:09:00.429565    4988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.pem (1078 bytes)
	I0731 15:09:00.429781    4988 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem, removing ...
	I0731 15:09:00.429786    4988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem
	I0731 15:09:00.429846    4988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/cert.pem (1123 bytes)
	I0731 15:09:00.429973    4988 exec_runner.go:144] found /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem, removing ...
	I0731 15:09:00.429977    4988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem
	I0731 15:09:00.430030    4988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19312-1411/.minikube/key.pem (1679 bytes)
	I0731 15:09:00.430137    4988 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-609000 san=[127.0.0.1 localhost minikube stopped-upgrade-609000]
	I0731 15:09:00.511618    4988 provision.go:177] copyRemoteCerts
	I0731 15:09:00.511656    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 15:09:00.511664    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:09:00.549114    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 15:09:00.556880    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 15:09:00.564586    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 15:09:00.571290    4988 provision.go:87] duration metric: took 141.961916ms to configureAuth
	I0731 15:09:00.571299    4988 buildroot.go:189] setting minikube options for container-runtime
	I0731 15:09:00.571420    4988 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:09:00.571453    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.571545    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.571550    4988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 15:09:00.639415    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 15:09:00.639424    4988 buildroot.go:70] root file system type: tmpfs
	I0731 15:09:00.639477    4988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 15:09:00.639520    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.639634    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.639669    4988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 15:09:00.711376    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 15:09:00.711451    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:00.711578    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:00.711589    4988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 15:09:01.052512    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 15:09:01.052524    4988 machine.go:97] duration metric: took 873.632375ms to provisionDockerMachine
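The SSH one-liner above is an apply-if-changed idiom: only when the rendered unit differs from the live one does minikube move the .new file into place and bounce the service, so an unchanged config never restarts Docker. A local Go sketch of the same pattern, assuming systemctl on PATH and sufficient privileges; the function name is hypothetical.

// Sketch: replace the live unit only when content differs, then reload/restart.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func applyUnit(rendered []byte) error {
	const live = "/lib/systemd/system/docker.service"
	cur, _ := os.ReadFile(live) // a missing file reads as empty, like diff's "can't stat"
	if bytes.Equal(cur, rendered) {
		return nil // unchanged: leave the running service alone
	}
	if err := os.WriteFile(live+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(live+".new", live); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // trimmed sample
	if err := applyUnit(unit); err != nil {
		fmt.Println(err) // expected without root/systemd; the sketch is the pattern
	}
}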
	I0731 15:09:01.052532    4988 start.go:293] postStartSetup for "stopped-upgrade-609000" (driver="qemu2")
	I0731 15:09:01.052539    4988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 15:09:01.052589    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 15:09:01.052599    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:09:01.089543    4988 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 15:09:01.090846    4988 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 15:09:01.090854    4988 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1411/.minikube/addons for local assets ...
	I0731 15:09:01.090931    4988 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19312-1411/.minikube/files for local assets ...
	I0731 15:09:01.091031    4988 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem -> 19132.pem in /etc/ssl/certs
	I0731 15:09:01.091130    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 15:09:01.093698    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem --> /etc/ssl/certs/19132.pem (1708 bytes)
	I0731 15:09:01.100995    4988 start.go:296] duration metric: took 48.458375ms for postStartSetup
	I0731 15:09:01.101011    4988 fix.go:56] duration metric: took 21.368247958s for fixHost
	I0731 15:09:01.101044    4988 main.go:141] libmachine: Using SSH client type: native
	I0731 15:09:01.101149    4988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005baa10] 0x1005bd270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0731 15:09:01.101153    4988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 15:09:01.167199    4988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722463741.607963254
	
	I0731 15:09:01.167207    4988 fix.go:216] guest clock: 1722463741.607963254
	I0731 15:09:01.167212    4988 fix.go:229] Guest: 2024-07-31 15:09:01.607963254 -0700 PDT Remote: 2024-07-31 15:09:01.101012 -0700 PDT m=+21.482586667 (delta=506.951254ms)
	I0731 15:09:01.167224    4988 fix.go:200] guest clock delta is within tolerance: 506.951254ms
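fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the skew is inside a tolerance. A sketch of that parse-and-compare; the 2s tolerance is an assumed illustration, not minikube's actual threshold, and the parser assumes the nine fractional digits that %N emits.

// Sketch: parse "seconds.nanoseconds" from the guest and compute the delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestTime(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := strconv.ParseInt(frac, 10, 64) // assumes 9 digits, as %N produces
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, ns), nil
}

func main() {
	g, err := guestTime("1722463741.607963254") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := g.Sub(time.Now())
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() < 2*time.Second)
}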
	I0731 15:09:01.167227    4988 start.go:83] releasing machines lock for "stopped-upgrade-609000", held for 21.434474042s
	I0731 15:09:01.167301    4988 ssh_runner.go:195] Run: cat /version.json
	I0731 15:09:01.167311    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:09:01.167301    4988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 15:09:01.167339    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	W0731 15:09:01.168043    4988 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50463: connect: connection refused
	I0731 15:09:01.168066    4988 retry.go:31] will retry after 260.730151ms: dial tcp [::1]:50463: connect: connection refused
	W0731 15:09:01.479854    4988 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 15:09:01.480035    4988 ssh_runner.go:195] Run: systemctl --version
	I0731 15:09:01.483420    4988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 15:09:01.486168    4988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 15:09:01.486215    4988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 15:09:01.490946    4988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 15:09:01.497950    4988 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
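The find/sed pair above rewrites any bridge/podman CNI config so its subnet and gateway match the cluster's 10.244.0.0/16 pod CIDR. A sketch of that substitution with Go's regexp package over an in-memory conflist; the sample JSON is invented for illustration.

// Sketch: pin a CNI conflist's subnet/gateway to the pod CIDR.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `{ "plugins": [ { "type": "bridge", "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" } ] }`
	subnet := regexp.MustCompile(`"subnet": "[^"]*"`)
	gateway := regexp.MustCompile(`"gateway": "[^"]*"`)
	conf = subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
	conf = gateway.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
	fmt.Println(conf)
}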
	I0731 15:09:01.497969    4988 start.go:495] detecting cgroup driver to use...
	I0731 15:09:01.498085    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 15:09:01.507664    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 15:09:01.511332    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 15:09:01.514725    4988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 15:09:01.514756    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 15:09:01.518125    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 15:09:01.521596    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 15:09:01.525026    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 15:09:01.528150    4988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 15:09:01.530842    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 15:09:01.533756    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 15:09:01.537003    4988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 15:09:01.540110    4988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 15:09:01.542625    4988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 15:09:01.545531    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:01.610786    4988 ssh_runner.go:195] Run: sudo systemctl restart containerd
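The sed pipeline above pins containerd to the cgroupfs driver by editing config.toml in place. The same key substitution expressed with Go's regexp package over an in-memory sample; file I/O and the other sed edits are omitted.

// Sketch: flip SystemdCgroup to false, preserving indentation, as the sed does.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}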
	I0731 15:09:01.620879    4988 start.go:495] detecting cgroup driver to use...
	I0731 15:09:01.620941    4988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 15:09:01.626013    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 15:09:01.635163    4988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 15:09:01.641194    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 15:09:01.645682    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 15:09:01.650224    4988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 15:09:01.715118    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 15:09:01.720719    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 15:09:01.726363    4988 ssh_runner.go:195] Run: which cri-dockerd
	I0731 15:09:01.727820    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 15:09:01.730305    4988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 15:09:01.735462    4988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 15:09:01.802564    4988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 15:09:01.865030    4988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 15:09:01.865099    4988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 15:09:01.870462    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:01.937324    4988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 15:09:03.075521    4988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.138194917s)
	I0731 15:09:03.075577    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 15:09:03.080664    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 15:09:03.085053    4988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 15:09:03.148833    4988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 15:09:03.208990    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:03.268681    4988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 15:09:03.274845    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 15:09:03.279283    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:03.341440    4988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 15:09:03.381745    4988 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 15:09:03.381867    4988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 15:09:03.384464    4988 start.go:563] Will wait 60s for crictl version
	I0731 15:09:03.384510    4988 ssh_runner.go:195] Run: which crictl
	I0731 15:09:03.386549    4988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 15:09:03.401266    4988 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 15:09:03.401329    4988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 15:09:03.417584    4988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 15:09:03.438542    4988 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 15:09:03.438603    4988 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 15:09:03.439899    4988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
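The bash above drops any stale `host.minikube.internal` line from /etc/hosts and appends the fresh 10.0.2.2 mapping via a temp file. A local sketch of that rewrite; it targets a sample file (hosts.sample) rather than the real /etc/hosts so it is safe to run.

// Sketch: remove any existing line for the name, append the new mapping,
// and swap the file in atomically via rename.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var keep []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) { // mirrors grep -v $'\t<name>$'
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	_ = os.WriteFile("hosts.sample", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostEntry("hosts.sample", "10.0.2.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}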
	I0731 15:09:03.443552    4988 kubeadm.go:883] updating cluster {Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 15:09:03.443595    4988 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 15:09:03.443634    4988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 15:09:03.453732    4988 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 15:09:03.453739    4988 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 15:09:03.453783    4988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 15:09:03.457169    4988 ssh_runner.go:195] Run: which lz4
	I0731 15:09:03.458428    4988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 15:09:03.459778    4988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 15:09:03.459788    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 15:09:04.411896    4988 docker.go:649] duration metric: took 953.512666ms to copy over tarball
	I0731 15:09:04.411962    4988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 15:09:05.597889    4988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185932875s)
	I0731 15:09:05.597903    4988 ssh_runner.go:146] rm: /preloaded.tar.lz4
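The sequence above is: stat the preload tarball, copy it over when missing, untar it with lz4 into /var, then delete it. A condensed local sketch of the extract-and-clean half, assuming a tar with lz4 support on PATH; the paths mirror the log.

// Sketch: extract the preload tarball the way the logged tar command does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // in the log this is scp'd over first
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("would copy preload tarball first:", err)
		return
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	_ = os.Remove(tarball) // matches the rm once the image store is populated
}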
	I0731 15:09:05.614171    4988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 15:09:05.617631    4988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 15:09:05.622812    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:05.685025    4988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 15:09:07.313893    4988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.628878833s)
	I0731 15:09:07.313981    4988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 15:09:07.327802    4988 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 15:09:07.327812    4988 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 15:09:07.327818    4988 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 15:09:07.333227    4988 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.335296    4988 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.336881    4988 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.336882    4988 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.338453    4988 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.338471    4988 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.339943    4988 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.340133    4988 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.341323    4988 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.341371    4988 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.342330    4988 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0731 15:09:07.342516    4988 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.343749    4988 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.343854    4988 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.344783    4988 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 15:09:07.345349    4988 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.771686    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.773786    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.788545    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.790445    4988 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 15:09:07.790467    4988 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.790504    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 15:09:07.793363    4988 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 15:09:07.793384    4988 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.793431    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 15:09:07.802205    4988 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 15:09:07.802230    4988 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.802291    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 15:09:07.806412    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 15:09:07.810408    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 15:09:07.815196    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 15:09:07.816481    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 15:09:07.824748    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.827412    4988 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 15:09:07.827430    4988 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 15:09:07.827465    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 15:09:07.839161    4988 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 15:09:07.839174    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 15:09:07.839181    4988 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.839232    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 15:09:07.839280    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 15:09:07.841166    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.851785    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 15:09:07.851892    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 15:09:07.851911    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 15:09:07.852186    4988 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 15:09:07.852199    4988 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.852237    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 15:09:07.860240    4988 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 15:09:07.860253    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 15:09:07.865841    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 15:09:07.865955    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0731 15:09:07.866200    4988 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 15:09:07.866300    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.892774    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 15:09:07.892815    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 15:09:07.892840    4988 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 15:09:07.892840    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 15:09:07.892857    4988 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.892903    4988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 15:09:07.906674    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 15:09:07.906790    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 15:09:07.908277    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 15:09:07.908290    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0731 15:09:07.931448    4988 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 15:09:07.931560    4988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.963756    4988 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 15:09:07.963785    4988 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.963850    4988 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:09:07.996054    4988 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 15:09:07.996076    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 15:09:08.004657    4988 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 15:09:08.004782    4988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 15:09:08.113773    4988 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 15:09:08.113780    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 15:09:08.113804    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 15:09:08.184126    4988 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 15:09:08.184140    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 15:09:08.513344    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 15:09:08.513365    4988 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 15:09:08.513373    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 15:09:08.663105    4988 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 15:09:08.663151    4988 cache_images.go:92] duration metric: took 1.335348792s to LoadCachedImages
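The LoadCachedImages loop above boils down to: inspect each image ID in the runtime, and when it is missing or mismatched, remove the stale tag and stream the cached tarball into `docker load`. A condensed sketch of one iteration; the expected ID and tarball path passed in main are illustrative placeholders, not values from this run.

// Sketch: ensure one image is present at the expected ID, loading from cache otherwise.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func ensureImage(ref, wantID, tarball string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash; nothing to transfer
	}
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f // equivalent of `cat tarball | docker load` in the log
	if out, err := load.CombinedOutput(); err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.7",
		"sha256:e5a475a03805...", // truncated placeholder for illustration
		"/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}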
	W0731 15:09:08.663202    4988 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0731 15:09:08.663207    4988 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 15:09:08.663264    4988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-609000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 15:09:08.663335    4988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 15:09:08.676668    4988 cni.go:84] Creating CNI manager for ""
	I0731 15:09:08.676681    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:09:08.676687    4988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 15:09:08.676694    4988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-609000 NodeName:stopped-upgrade-609000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 15:09:08.676760    4988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-609000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 15:09:08.676817    4988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 15:09:08.680016    4988 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 15:09:08.680050    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 15:09:08.682476    4988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 15:09:08.687204    4988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 15:09:08.691738    4988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 15:09:08.696737    4988 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 15:09:08.697966    4988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 15:09:08.701735    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:09:08.766689    4988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:09:08.776414    4988 certs.go:68] Setting up /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000 for IP: 10.0.2.15
	I0731 15:09:08.776425    4988 certs.go:194] generating shared ca certs ...
	I0731 15:09:08.776435    4988 certs.go:226] acquiring lock for ca certs: {Name:mk0bfd7451d2ce366c95ee7ce2af2fa5265e7335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.776608    4988 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.key
	I0731 15:09:08.776647    4988 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.key
	I0731 15:09:08.776653    4988 certs.go:256] generating profile certs ...
	I0731 15:09:08.776711    4988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.key
	I0731 15:09:08.776732    4988 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf
	I0731 15:09:08.776743    4988 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 15:09:08.835581    4988 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf ...
	I0731 15:09:08.835597    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf: {Name:mkfdb7af116406fb5ca43546504716c0cea15846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.836504    4988 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf ...
	I0731 15:09:08.836509    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf: {Name:mk2c7609c59a21189518168a8dd8ebaba6a7ef28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.836677    4988 certs.go:381] copying /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt.665e6fcf -> /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt
	I0731 15:09:08.836808    4988 certs.go:385] copying /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key.665e6fcf -> /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key
	I0731 15:09:08.836936    4988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/proxy-client.key
	I0731 15:09:08.837062    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913.pem (1338 bytes)
	W0731 15:09:08.837091    4988 certs.go:480] ignoring /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913_empty.pem, impossibly tiny 0 bytes
	I0731 15:09:08.837095    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 15:09:08.837114    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem (1078 bytes)
	I0731 15:09:08.837132    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem (1123 bytes)
	I0731 15:09:08.837152    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/key.pem (1679 bytes)
	I0731 15:09:08.837191    4988 certs.go:484] found cert: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem (1708 bytes)
	I0731 15:09:08.837556    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 15:09:08.845031    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 15:09:08.851567    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 15:09:08.858140    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 15:09:08.865395    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 15:09:08.872584    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 15:09:08.879591    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 15:09:08.886059    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 15:09:08.893194    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 15:09:08.900054    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/1913.pem --> /usr/share/ca-certificates/1913.pem (1338 bytes)
	I0731 15:09:08.906493    4988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/ssl/certs/19132.pem --> /usr/share/ca-certificates/19132.pem (1708 bytes)
	I0731 15:09:08.913477    4988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 15:09:08.918778    4988 ssh_runner.go:195] Run: openssl version
	I0731 15:09:08.920688    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 15:09:08.923588    4988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:09:08.924950    4988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:27 /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:09:08.924975    4988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 15:09:08.926938    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 15:09:08.930025    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1913.pem && ln -fs /usr/share/ca-certificates/1913.pem /etc/ssl/certs/1913.pem"
	I0731 15:09:08.933396    4988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1913.pem
	I0731 15:09:08.934927    4988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 21:34 /usr/share/ca-certificates/1913.pem
	I0731 15:09:08.934950    4988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1913.pem
	I0731 15:09:08.936727    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1913.pem /etc/ssl/certs/51391683.0"
	I0731 15:09:08.939654    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19132.pem && ln -fs /usr/share/ca-certificates/19132.pem /etc/ssl/certs/19132.pem"
	I0731 15:09:08.942489    4988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19132.pem
	I0731 15:09:08.943961    4988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 21:34 /usr/share/ca-certificates/19132.pem
	I0731 15:09:08.943979    4988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19132.pem
	I0731 15:09:08.945762    4988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19132.pem /etc/ssl/certs/3ec20f2e.0"
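The `ln -fs` commands above create OpenSSL-style hash links (<subject-hash>.0) so TLS clients can find CA certificates by directory lookup. A sketch that derives the link name the same way the log does, by shelling out to `openssl x509 -hash`; it assumes openssl on PATH and write access to the certs directory.

// Sketch: create the <subject-hash>.0 symlink for one CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // -f semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// e.g. minikubeCA.pem hashed to b5213941 in the log above
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}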
	I0731 15:09:08.949568    4988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 15:09:08.951092    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 15:09:08.954771    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 15:09:08.956473    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 15:09:08.958346    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 15:09:08.960206    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 15:09:08.962032    4988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
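`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next day; minikube uses that to decide whether control-plane certs need regenerating. The equivalent check with crypto/x509, using one of the logged paths; a minimal sketch with the 24h window matching the 86400 seconds above.

// Sketch: report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}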
	I0731 15:09:08.963929    4988 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-609000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 15:09:08.963994    4988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 15:09:08.974001    4988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 15:09:08.977219    4988 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 15:09:08.977226    4988 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 15:09:08.977249    4988 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 15:09:08.980118    4988 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 15:09:08.980387    4988 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-609000" does not appear in /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:09:08.980484    4988 kubeconfig.go:62] /Users/jenkins/minikube-integration/19312-1411/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-609000" cluster setting kubeconfig missing "stopped-upgrade-609000" context setting]
	I0731 15:09:08.980685    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:09:08.982742    4988 kapi.go:59] client config for stopped-upgrade-609000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101950700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 15:09:08.983036    4988 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 15:09:08.985682    4988 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-609000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
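
The drift check at kubeadm.go:640 boils down to diffing the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new and treating any difference as a reason to reconfigure; here the CRI socket scheme and the cgroup driver changed across the upgrade. A rough Go sketch of that check (illustrative only; diff's exit codes do the work):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted runs `diff -u old new` and reports drift.
// diff exits 0 when the files match, 1 when they differ, >1 on error.
func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical, nothing to do
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // drift detected
	}
	return false, "", err // real failure (missing file, etc.)
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Printf("config drift, will reconfigure:\n%s", diff)
	}
}
```
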
	I0731 15:09:08.985689    4988 kubeadm.go:1160] stopping kube-system containers ...
	I0731 15:09:08.985729    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 15:09:08.996301    4988 docker.go:483] Stopping containers: [8bb0ebee54c4 e30a2b1ee885 6ae13f04c4cd fe739fbe2f95 c66065d4d5ac a278d566ee4c 3d36dc6afdf3 b869ebda42e1 738225ad0b68]
	I0731 15:09:08.996360    4988 ssh_runner.go:195] Run: docker stop 8bb0ebee54c4 e30a2b1ee885 6ae13f04c4cd fe739fbe2f95 c66065d4d5ac a278d566ee4c 3d36dc6afdf3 b869ebda42e1 738225ad0b68
	I0731 15:09:09.011269    4988 ssh_runner.go:195] Run: sudo systemctl stop kubelet
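
Before reconfiguring, every kube-system container is stopped along with the kubelet, so the old static pods cannot race the new configuration. A hedged sketch of that teardown, reusing the docker name filter from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List every kube-system pod container, running or not.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_",
		"--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("Stopping containers: %v\n", ids)
	if len(ids) > 0 {
		if err := exec.Command("docker",
			append([]string{"stop"}, ids...)...).Run(); err != nil {
			panic(err)
		}
	}
	// The kubelet would otherwise restart the static pods immediately.
	_ = exec.Command("systemctl", "stop", "kubelet").Run()
}
```
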
	I0731 15:09:09.016737    4988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:09:09.019553    4988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 15:09:09.019559    4988 kubeadm.go:157] found existing configuration files:
	
	I0731 15:09:09.019580    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0731 15:09:09.021958    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 15:09:09.021980    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:09:09.024930    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0731 15:09:09.027536    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 15:09:09.027557    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:09:09.029942    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0731 15:09:09.032982    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 15:09:09.033007    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:09:09.035528    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0731 15:09:09.037788    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 15:09:09.037807    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
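
Each of the four kubeconfigs under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here they are simply absent), so the next kubeadm phase regenerates them. Sketched in Go, with the endpoint and file list taken from the log:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50498"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the pattern (or the file) is missing.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			os.Remove(f) // ignore error: the file may already be gone
		}
	}
}
```
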
	I0731 15:09:09.040758    4988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:09:09.043656    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.066093    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.528979    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.642206    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 15:09:09.671681    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
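
Rather than a full `kubeadm init`, the restart replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A compact sketch of that sequence, assuming the binary and config paths from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", cfg)
		fmt.Println("running: kubeadm", args)
		if err := exec.Command(kubeadm, args...).Run(); err != nil {
			panic(err) // a failed phase aborts the restart
		}
	}
}
```
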
	I0731 15:09:09.708622    4988 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:09:09.708704    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:10.210898    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:10.710807    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:09:10.715440    4988 api_server.go:72] duration metric: took 1.006836375s to wait for apiserver process to appear ...
	I0731 15:09:10.715453    4988 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:09:10.715462    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:15.717540    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:15.717586    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:20.717882    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:20.717905    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:25.718245    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:25.718313    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:30.719048    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:30.719092    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:35.719738    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:35.719758    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:40.720569    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:40.720631    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:45.721880    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:45.721924    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:50.723499    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:50.723542    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:09:55.725469    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:09:55.725499    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:00.727646    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:00.727693    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:05.729930    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:05.729985    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:10.732365    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
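
From here the test is stuck in the healthz wait loop: each probe of https://10.0.2.15:8443/healthz times out after roughly five seconds (api_server.go:269) and is retried until the overall deadline, which is why the same pair of lines repeats for minutes. A minimal sketch of such a loop (TLS verification skipped for brevity; the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gaps in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
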
	I0731 15:10:10.732667    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:10.759871    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:10.759986    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:10.776462    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:10.776554    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:10.789959    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:10.790033    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:10.801807    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:10.801876    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:10.815898    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:10.815961    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:10.826298    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:10.826361    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:10.836620    4988 logs.go:276] 0 containers: []
	W0731 15:10:10.836632    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:10.836687    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:10.850864    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:10.850880    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:10.850887    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:10.855552    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:10.855559    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:10.869482    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:10.869492    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:10.899863    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:10.899874    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:10.911884    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:10.911895    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:10.925828    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:10.925839    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:10.940554    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:10.940565    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:10.951815    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:10.951826    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:10.963065    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:10.963077    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:11.000615    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:11.000625    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:11.105101    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:11.105115    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:11.116919    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:11.116940    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:11.129018    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:11.129030    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:11.174096    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:11.174108    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:11.192236    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:11.192247    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:11.206252    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:11.206263    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
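
Whenever a probe window expires, a diagnostics sweep runs (logs.go:123): `docker logs --tail 400` for every control-plane container, plus the kubelet and docker journals, dmesg, and `kubectl describe nodes`. The cycle below repeats essentially unchanged for the rest of the run. A compact sketch of the sweep, with two container IDs hard-coded from this run purely for illustration; a real implementation would discover them via `docker ps` first:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"docker", "logs", "--tail", "400", "072c2c031eb1"}, // kube-apiserver
		{"docker", "logs", "--tail", "400", "f6335319f7f7"}, // etcd
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
	}
	for _, c := range cmds {
		// Collect output even when a command fails: partial logs still help.
		out, _ := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("== %v ==\n%s\n", c, out)
	}
}
```
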
	I0731 15:10:13.730748    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:18.732114    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:18.732267    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:18.749921    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:18.750005    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:18.763944    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:18.764026    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:18.774862    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:18.774933    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:18.785459    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:18.785534    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:18.795715    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:18.795786    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:18.806283    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:18.806347    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:18.819134    4988 logs.go:276] 0 containers: []
	W0731 15:10:18.819144    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:18.819206    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:18.828632    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:18.828648    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:18.828653    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:18.867549    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:18.867561    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:18.880545    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:18.880560    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:18.894409    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:18.894421    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:18.905873    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:18.905884    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:18.931523    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:18.931532    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:18.970008    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:18.970020    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:18.974480    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:18.974491    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:18.988385    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:18.988395    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:19.022115    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:19.022130    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:19.033906    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:19.033918    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:19.045944    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:19.045956    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:19.084509    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:19.084520    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:19.099030    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:19.099045    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:19.114387    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:19.114398    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:19.126214    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:19.126226    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:21.647087    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:26.649327    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:26.649535    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:26.662865    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:26.662941    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:26.673904    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:26.673981    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:26.684825    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:26.684902    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:26.695131    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:26.695207    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:26.705192    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:26.705255    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:26.715216    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:26.715281    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:26.725503    4988 logs.go:276] 0 containers: []
	W0731 15:10:26.725518    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:26.725573    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:26.735925    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:26.735942    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:26.735947    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:26.747830    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:26.747840    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:26.767471    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:26.767482    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:26.806513    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:26.806523    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:26.818337    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:26.818347    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:26.839723    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:26.839738    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:26.864106    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:26.864116    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:26.876368    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:26.876380    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:26.888723    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:26.888733    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:26.906393    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:26.906404    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:26.921065    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:26.921075    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:26.960420    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:26.960430    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:26.965459    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:26.965465    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:27.001817    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:27.001827    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:27.016580    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:27.016590    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:27.031985    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:27.031995    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:29.543453    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:34.545737    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:34.545921    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:34.565001    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:34.565095    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:34.579891    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:34.579966    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:34.591591    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:34.591658    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:34.602513    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:34.602581    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:34.613124    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:34.613192    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:34.624227    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:34.624296    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:34.634271    4988 logs.go:276] 0 containers: []
	W0731 15:10:34.634283    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:34.634335    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:34.644334    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:34.644349    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:34.644354    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:34.666405    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:34.666415    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:34.677706    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:34.677716    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:34.692715    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:34.692726    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:34.704704    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:34.704718    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:34.742451    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:34.742466    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:34.754135    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:34.754147    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:34.771214    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:34.771223    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:34.775276    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:34.775282    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:34.811888    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:34.811903    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:34.824041    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:34.824051    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:34.837659    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:34.837670    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:34.861742    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:34.861755    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:34.898119    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:34.898133    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:34.911842    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:34.911853    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:34.932471    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:34.932494    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:37.446499    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:42.448843    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:42.449200    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:42.479141    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:42.479266    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:42.498103    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:42.498197    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:42.512771    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:42.512853    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:42.525194    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:42.525271    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:42.535966    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:42.536037    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:42.546592    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:42.546665    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:42.561735    4988 logs.go:276] 0 containers: []
	W0731 15:10:42.561746    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:42.561813    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:42.572268    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:42.572286    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:42.572292    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:42.584613    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:42.584624    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:42.588984    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:42.588990    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:42.602725    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:42.602735    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:42.614026    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:42.614042    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:42.636154    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:42.636163    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:42.659364    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:42.659371    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:42.695372    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:42.695380    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:42.730251    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:42.730263    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:42.744622    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:42.744636    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:42.761883    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:42.761894    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:42.776133    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:42.776145    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:42.787416    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:42.787429    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:42.799476    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:42.799489    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:42.814066    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:42.814077    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:42.852957    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:42.852977    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:45.366881    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:50.369073    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:50.369319    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:50.393787    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:50.393907    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:50.412688    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:50.412781    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:50.424490    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:50.424555    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:50.435418    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:50.435488    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:50.445696    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:50.445769    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:50.456023    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:50.456086    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:50.466132    4988 logs.go:276] 0 containers: []
	W0731 15:10:50.466144    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:50.466207    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:50.477714    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:50.477736    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:50.477742    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:50.516940    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:50.516959    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:50.556647    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:50.556663    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:50.571191    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:50.571203    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:50.582854    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:50.582869    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:10:50.607172    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:50.607178    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:50.620750    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:50.620761    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:50.631903    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:50.631913    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:50.635959    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:50.635965    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:50.670982    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:50.670992    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:50.685034    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:50.685049    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:50.702999    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:50.703009    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:50.718961    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:50.718972    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:50.730072    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:50.730086    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:50.742801    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:50.742810    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:50.764893    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:50.764905    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:53.279091    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:10:58.281420    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:10:58.281766    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:10:58.312867    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:10:58.313003    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:10:58.332055    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:10:58.332155    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:10:58.346109    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:10:58.346194    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:10:58.360450    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:10:58.360521    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:10:58.371185    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:10:58.371258    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:10:58.385859    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:10:58.385923    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:10:58.396393    4988 logs.go:276] 0 containers: []
	W0731 15:10:58.396407    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:10:58.396463    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:10:58.406829    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:10:58.406847    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:10:58.406854    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:10:58.429532    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:10:58.429543    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:10:58.441313    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:10:58.441324    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:10:58.480285    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:10:58.480300    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:10:58.494993    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:10:58.495007    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:10:58.509985    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:10:58.509995    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:10:58.521567    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:10:58.521580    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:10:58.563540    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:10:58.563551    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:10:58.574703    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:10:58.574714    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:10:58.599499    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:10:58.599512    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:10:58.617163    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:10:58.617174    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:10:58.630523    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:10:58.630537    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:10:58.642546    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:10:58.642556    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:10:58.679372    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:10:58.679381    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:10:58.683269    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:10:58.683278    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:10:58.695736    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:10:58.695746    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:01.220227    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:06.222068    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:06.222460    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:06.250862    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:06.250994    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:06.269489    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:06.269578    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:06.282640    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:06.282715    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:06.294528    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:06.294598    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:06.304885    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:06.304957    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:06.318258    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:06.318326    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:06.329005    4988 logs.go:276] 0 containers: []
	W0731 15:11:06.329020    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:06.329085    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:06.339223    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:06.339240    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:06.339246    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:06.351120    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:06.351134    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:06.368802    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:06.368823    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:06.380178    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:06.380190    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:06.418197    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:06.418206    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:06.432691    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:06.432703    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:06.456755    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:06.456763    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:06.468673    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:06.468685    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:06.505458    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:06.505468    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:06.543595    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:06.543606    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:06.555769    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:06.555784    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:06.567121    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:06.567132    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:06.581105    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:06.581116    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:06.595128    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:06.595140    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:06.608639    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:06.608649    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:06.629918    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:06.629931    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:09.136311    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:14.138680    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:14.138891    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:14.160241    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:14.160340    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:14.175500    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:14.175575    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:14.188509    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:14.188591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:14.199148    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:14.199218    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:14.209245    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:14.209309    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:14.220661    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:14.220735    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:14.231324    4988 logs.go:276] 0 containers: []
	W0731 15:11:14.231335    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:14.231393    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:14.241957    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:14.241976    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:14.241982    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:14.256324    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:14.256338    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:14.273127    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:14.273136    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:14.296294    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:14.296303    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:14.308450    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:14.308463    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:14.322719    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:14.322731    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:14.367630    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:14.367643    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:14.384318    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:14.384334    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:14.401465    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:14.401478    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:14.406188    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:14.406194    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:14.418294    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:14.418305    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:14.439504    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:14.439516    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:14.452570    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:14.452581    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:14.492166    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:14.492176    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:14.510486    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:14.510502    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:14.523491    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:14.523500    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:17.064113    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:22.066415    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
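
The two api_server.go lines above are the heartbeat of this whole section: minikube polls the guest apiserver's /healthz endpoint, the GET times out after roughly five seconds, and the loop falls back to gathering diagnostics before trying again about 2.5s later. A minimal Go sketch of that polling pattern, assuming a plain net/http client (the URL and timings are taken from the log; the loop structure and all names are illustrative, not minikube's actual api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The ~5s gap between each "Checking" and "stopped" line above
        // matches a 5-second HTTP client timeout.
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The guest apiserver serves a self-signed cert; skipping
            // verification keeps the sketch self-contained. Sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://10.0.2.15:8443/healthz" // endpoint taken from the log
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err != nil {
                // A timeout surfaces as "context deadline exceeded
                // (Client.Timeout exceeded while awaiting headers)",
                // exactly as in the line above.
                fmt.Printf("stopped: %s: %v\n", url, err)
                time.Sleep(2500 * time.Millisecond) // ~2.5s between cycles in the log
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver is healthy")
                return
            }
        }
        fmt.Println("giving up: apiserver never reported healthy")
    }
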
	I0731 15:11:22.066625    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:22.081524    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:22.081612    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:22.093732    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:22.093802    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:22.104130    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:22.104199    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:22.114679    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:22.114753    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:22.125432    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:22.125506    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:22.135521    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:22.135591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:22.145376    4988 logs.go:276] 0 containers: []
	W0731 15:11:22.145387    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:22.145449    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:22.155684    4988 logs.go:276] 1 containers: [8a0701247365]
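
Each diagnostic pass begins by locating every control-plane container with the name filters shown above; logs.go:276 then reports the count and IDs. The hypothetical containerIDs helper below wraps the same docker ps invocation locally (minikube actually runs it inside the guest via ssh_runner.go); the component list mirrors the one queried in each cycle:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs wraps: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                return
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                // Mirrors the recurring warning for the absent kindnet CNI pod.
                fmt.Printf("W: No container was found matching %q\n", name)
            }
        }
    }
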
	I0731 15:11:22.155700    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:22.155708    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:22.167095    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:22.167106    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:22.188646    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:22.188661    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:22.211842    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:22.211852    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:22.245765    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:22.245779    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:22.259605    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:22.259615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:22.299493    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:22.299504    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:22.314332    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:22.314342    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:22.329025    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:22.329035    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:22.341601    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:22.341613    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:22.379807    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:22.379819    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:22.400534    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:22.400544    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:22.411870    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:22.411885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:22.428942    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:22.428953    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:22.442453    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:22.442466    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:22.446899    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:22.446907    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
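
With the IDs in hand, each "Gathering logs for X [id] ..." step pulls the last 400 log lines through /bin/bash -c, exactly as the Run: lines show. A local, illustrative equivalent (gather is a made-up name and the output handling is guesswork; only the command string comes from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather fetches the last 400 log lines of one container, mirroring
    // `/bin/bash -c "docker logs --tail 400 <id>"` from the cycle above.
    func gather(component, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        if err != nil {
            fmt.Printf("  gather failed: %v\n", err)
            return
        }
        fmt.Printf("  collected %d bytes\n", len(out))
    }

    func main() {
        // IDs taken from the cycle above; any local container ID works.
        gather("kube-apiserver", "072c2c031eb1")
        gather("etcd", "f6335319f7f7")
        gather("storage-provisioner", "8a0701247365")
    }
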
	I0731 15:11:24.959923    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:29.962104    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:29.962435    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:29.976696    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:29.976769    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:29.987602    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:29.987673    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:29.998247    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:29.998323    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:30.009077    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:30.009149    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:30.018936    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:30.019024    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:30.029933    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:30.030001    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:30.040279    4988 logs.go:276] 0 containers: []
	W0731 15:11:30.040290    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:30.040345    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:30.050304    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:30.050321    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:30.050327    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:30.061923    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:30.061933    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:30.076445    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:30.076456    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:30.080394    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:30.080403    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:30.106513    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:30.106522    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:30.121002    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:30.121014    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:30.132996    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:30.133005    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:30.145343    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:30.145354    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:30.159190    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:30.159201    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:30.196819    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:30.196830    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:30.219781    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:30.219790    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:30.242859    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:30.242872    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:30.261867    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:30.261877    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:30.300286    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:30.300294    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:30.358973    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:30.358985    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:30.373515    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:30.373527    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:32.893118    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:37.895338    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:37.895481    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:37.913063    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:37.913147    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:37.924263    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:37.924338    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:37.934770    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:37.934835    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:37.945474    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:37.945544    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:37.956120    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:37.956192    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:37.966965    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:37.967035    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:37.977480    4988 logs.go:276] 0 containers: []
	W0731 15:11:37.977490    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:37.977550    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:37.987859    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:37.987879    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:37.987885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:38.002243    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:38.002256    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:38.019264    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:38.019274    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:38.033547    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:38.033558    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:38.038433    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:38.038440    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:38.052831    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:38.052842    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:38.074282    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:38.074295    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:38.085938    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:38.085950    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:38.097402    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:38.097415    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:38.131874    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:38.131885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:38.170731    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:38.170742    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:38.182085    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:38.182096    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:38.194375    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:38.194387    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:38.219262    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:38.219275    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:38.259802    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:38.259829    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:38.275837    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:38.275846    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:40.790504    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:45.792731    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:45.792862    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:45.807324    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:45.807405    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:45.819882    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:45.819952    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:45.830252    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:45.830317    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:45.841157    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:45.841229    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:45.851993    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:45.852061    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:45.862649    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:45.862716    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:45.873003    4988 logs.go:276] 0 containers: []
	W0731 15:11:45.873015    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:45.873065    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:45.885923    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:45.885940    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:45.885946    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:45.922688    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:45.922697    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:45.944555    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:45.944565    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:45.969607    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:45.969619    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:45.984052    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:45.984066    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:45.988352    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:45.988359    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:46.025700    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:46.025711    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:46.046288    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:46.046305    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:46.059384    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:46.059397    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:46.078528    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:46.078551    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:46.117942    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:46.117960    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:46.131097    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:46.131108    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:46.146738    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:46.146746    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:46.161529    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:46.161538    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:46.173938    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:46.173949    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:46.188664    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:46.188676    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
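
Alongside per-container logs, every cycle also collects host-level diagnostics: the kubelet and Docker journald units, filtered dmesg output, and a container listing. That last command is a small shell trick worth noting: which crictl || echo crictl substitutes crictl's full path when the binary exists and otherwise leaves the bare word, so the first command fails fast and the || sudo docker ps -a fallback fires. A condensed Go sketch running the same four commands locally (command strings copied verbatim from the Run: lines; everything else is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Host-level diagnostics, copied from the Run: lines in each cycle.
        steps := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            // Falls back to docker ps when crictl is not installed.
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range steps {
            fmt.Printf("Gathering logs for %s ...\n", s.name)
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  failed: %v\n", err)
                continue
            }
            fmt.Printf("  %d bytes collected\n", len(out))
        }
    }
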
	I0731 15:11:48.703386    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:11:53.705756    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:11:53.705971    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:11:53.723631    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:11:53.723719    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:11:53.737157    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:11:53.737230    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:11:53.750464    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:11:53.750540    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:11:53.761581    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:11:53.761655    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:11:53.771774    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:11:53.771843    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:11:53.784782    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:11:53.784856    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:11:53.795238    4988 logs.go:276] 0 containers: []
	W0731 15:11:53.795251    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:11:53.795307    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:11:53.805653    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:11:53.805671    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:11:53.805677    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:11:53.822788    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:11:53.822801    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:11:53.838190    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:11:53.838202    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:11:53.878262    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:11:53.878279    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:11:53.893115    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:11:53.893126    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:11:53.915357    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:11:53.915371    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:11:53.928356    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:11:53.928372    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:11:53.949839    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:11:53.949857    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:11:53.964763    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:11:53.964772    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:11:53.976900    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:11:53.976909    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:11:54.001635    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:11:54.001649    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:11:54.038921    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:11:54.038934    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:11:54.053842    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:11:54.053855    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:11:54.067204    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:11:54.067216    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:11:54.071988    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:11:54.072000    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:11:54.122605    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:11:54.122615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:11:56.637506    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:01.639819    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:01.640090    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:01.667647    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:01.667776    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:01.685252    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:01.685332    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:01.698718    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:01.698788    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:01.713888    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:01.713960    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:01.729226    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:01.729302    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:01.739637    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:01.739699    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:01.751155    4988 logs.go:276] 0 containers: []
	W0731 15:12:01.751166    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:01.751221    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:01.761382    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:01.761398    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:01.761403    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:01.785769    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:01.785780    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:01.826843    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:01.826858    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:01.841982    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:01.841990    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:01.865742    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:01.865755    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:01.891204    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:01.891214    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:01.903897    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:01.903908    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:01.919307    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:01.919320    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:01.931630    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:01.931642    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:01.944115    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:01.944128    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:01.963324    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:01.963338    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:01.967999    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:01.968011    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:02.011794    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:02.011808    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:02.054048    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:02.054067    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:02.073232    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:02.073245    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:02.095665    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:02.095675    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:04.609651    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:09.612192    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:09.612470    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:09.637474    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:09.637591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:09.654083    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:09.654167    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:09.667804    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:09.667888    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:09.680476    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:09.680556    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:09.700853    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:09.700921    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:09.713225    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:09.713303    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:09.731618    4988 logs.go:276] 0 containers: []
	W0731 15:12:09.731627    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:09.731678    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:09.742818    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:09.742834    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:09.742839    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:09.773053    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:09.773065    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:09.785229    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:09.785240    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:09.810410    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:09.810429    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:09.848512    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:09.848525    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:09.863593    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:09.863604    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:09.904337    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:09.904359    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:09.922148    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:09.922161    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:09.937571    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:09.937585    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:09.964607    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:09.964616    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:09.969145    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:09.969154    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:09.981720    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:09.981734    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:09.999232    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:09.999243    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:10.038567    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:10.038576    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:10.053168    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:10.053178    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:10.074226    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:10.074237    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:12.589488    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:17.591748    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:17.591830    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:17.607328    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:17.607376    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:17.619118    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:17.619177    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:17.631091    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:17.631160    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:17.644145    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:17.644223    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:17.655891    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:17.655964    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:17.670134    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:17.670207    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:17.681764    4988 logs.go:276] 0 containers: []
	W0731 15:12:17.681776    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:17.681832    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:17.694890    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:17.694909    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:17.694914    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:17.735209    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:17.735226    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:17.752056    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:17.752067    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:17.767010    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:17.767028    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:17.779606    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:17.779620    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:17.819421    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:17.819437    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:17.837367    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:17.837379    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:17.851986    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:17.851997    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:17.864454    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:17.864467    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:17.888184    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:17.888198    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:17.910894    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:17.910908    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:17.928581    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:17.928596    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:17.945687    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:17.945702    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:17.960095    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:17.960109    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:17.964351    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:17.964357    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:18.001789    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:18.001803    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:20.518271    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:25.519003    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:25.519082    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:25.531075    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:25.531147    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:25.542421    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:25.542497    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:25.553756    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:25.553825    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:25.565043    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:25.565119    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:25.576169    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:25.576240    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:25.587519    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:25.587597    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:25.598774    4988 logs.go:276] 0 containers: []
	W0731 15:12:25.598786    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:25.598853    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:25.610280    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:25.610299    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:25.610306    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:25.629603    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:25.629620    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:25.652804    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:25.652815    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:25.666662    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:25.666673    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:25.681786    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:25.681801    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:25.686570    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:25.686580    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:25.730982    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:25.730995    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:25.749378    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:25.749387    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:25.761560    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:25.761571    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:25.784871    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:25.784879    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:25.796960    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:25.796974    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:25.810591    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:25.810601    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:25.848785    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:25.848797    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:25.866741    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:25.866752    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:25.880868    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:25.880879    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:25.917239    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:25.917247    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:28.430257    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:33.430886    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:33.430965    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:33.445051    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:33.445119    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:33.456550    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:33.456618    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:33.467680    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:33.467750    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:33.479314    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:33.479392    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:33.496428    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:33.496498    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:33.508855    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:33.508932    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:33.519783    4988 logs.go:276] 0 containers: []
	W0731 15:12:33.519794    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:33.519862    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:33.531020    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:33.531037    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:33.531043    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:33.552918    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:33.552930    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:33.566081    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:33.566094    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:33.612060    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:33.612072    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:33.651969    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:33.651982    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:33.666855    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:33.666867    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:33.677995    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:33.678009    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:33.699492    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:33.699505    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:33.711992    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:33.712008    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:33.725261    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:33.725274    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:33.737150    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:33.737163    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:33.774955    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:33.774965    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:33.796437    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:33.796448    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:33.808146    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:33.808157    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:33.812208    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:33.812216    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:33.833698    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:33.833705    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:36.349327    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:41.351440    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:41.351545    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:41.362961    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:41.363038    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:41.376187    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:41.376264    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:41.388188    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:41.388265    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:41.400233    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:41.400308    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:41.410948    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:41.411019    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:41.421751    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:41.421825    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:41.432348    4988 logs.go:276] 0 containers: []
	W0731 15:12:41.432360    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:41.432424    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:41.444978    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:41.444994    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:41.444999    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:41.449569    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:41.449582    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:41.462131    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:41.462145    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:41.478895    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:41.478906    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:41.519422    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:41.519432    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:41.556761    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:41.556773    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:41.568626    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:41.568638    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:41.580144    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:41.580155    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:41.597753    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:41.597764    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:41.611795    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:41.611805    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:41.634924    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:41.634932    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:41.650796    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:41.650809    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:41.686384    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:41.686394    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:41.700655    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:41.700664    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:41.715314    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:41.715327    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:41.740909    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:41.740921    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:44.255683    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:49.257871    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:49.257967    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:49.269351    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:49.269430    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:49.280418    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:49.280497    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:49.291520    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:49.291591    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:49.303324    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:49.303389    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:49.317895    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:49.317962    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:49.329770    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:49.329839    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:49.341525    4988 logs.go:276] 0 containers: []
	W0731 15:12:49.341537    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:49.341590    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:49.357433    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:49.357449    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:49.357456    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:49.383007    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:49.383019    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:49.398255    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:49.398267    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:49.435648    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:49.435662    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:49.474853    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:49.474865    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:49.496393    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:49.496408    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:49.514566    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:49.514577    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:49.525745    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:49.525757    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:49.539459    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:49.539469    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:49.551463    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:49.551474    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:12:49.565400    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:49.565411    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:49.588046    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:49.588056    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:49.599505    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:49.599516    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:49.603610    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:49.603615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:49.615312    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:49.615322    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:49.627641    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:49.627653    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:52.169020    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:12:57.171155    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 15:12:57.171248    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:12:57.183091    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:12:57.183179    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:12:57.195325    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:12:57.195404    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:12:57.210949    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:12:57.211024    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:12:57.226844    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:12:57.226924    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:12:57.247147    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:12:57.247230    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:12:57.266525    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:12:57.266606    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:12:57.278205    4988 logs.go:276] 0 containers: []
	W0731 15:12:57.278216    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:12:57.278295    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:12:57.290171    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:12:57.290186    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:12:57.290191    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:12:57.312543    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:12:57.312553    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:12:57.352324    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:12:57.352339    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:12:57.388862    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:12:57.388875    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:12:57.426062    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:12:57.426072    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:12:57.438570    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:12:57.438582    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:12:57.450271    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:12:57.450281    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:12:57.462217    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:12:57.462228    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:12:57.480030    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:12:57.480041    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:12:57.501607    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:12:57.501616    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:12:57.513748    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:12:57.513760    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:12:57.518353    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:12:57.518360    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:12:57.538963    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:12:57.538976    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:12:57.553535    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:12:57.553544    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:12:57.568246    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:12:57.568257    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:12:57.579891    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:12:57.579901    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:13:00.095379    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:05.097626    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:05.097729    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:13:05.116679    4988 logs.go:276] 2 containers: [072c2c031eb1 8bb0ebee54c4]
	I0731 15:13:05.116757    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:13:05.148163    4988 logs.go:276] 2 containers: [f6335319f7f7 fe739fbe2f95]
	I0731 15:13:05.148239    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:13:05.158757    4988 logs.go:276] 1 containers: [cf18dd58b00d]
	I0731 15:13:05.158831    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:13:05.169762    4988 logs.go:276] 2 containers: [64ddcf376840 c66065d4d5ac]
	I0731 15:13:05.169834    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:13:05.181217    4988 logs.go:276] 1 containers: [717215fb940e]
	I0731 15:13:05.181299    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:13:05.192593    4988 logs.go:276] 2 containers: [6533d1b9cbf0 6ae13f04c4cd]
	I0731 15:13:05.192669    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:13:05.203713    4988 logs.go:276] 0 containers: []
	W0731 15:13:05.203724    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:13:05.203789    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:13:05.216524    4988 logs.go:276] 1 containers: [8a0701247365]
	I0731 15:13:05.216546    4988 logs.go:123] Gathering logs for kube-scheduler [c66065d4d5ac] ...
	I0731 15:13:05.216552    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66065d4d5ac"
	I0731 15:13:05.239001    4988 logs.go:123] Gathering logs for storage-provisioner [8a0701247365] ...
	I0731 15:13:05.239015    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0701247365"
	I0731 15:13:05.253108    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:13:05.253119    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:13:05.265013    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:13:05.265024    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:13:05.301675    4988 logs.go:123] Gathering logs for kube-apiserver [8bb0ebee54c4] ...
	I0731 15:13:05.301683    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb0ebee54c4"
	I0731 15:13:05.338876    4988 logs.go:123] Gathering logs for kube-apiserver [072c2c031eb1] ...
	I0731 15:13:05.338887    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 072c2c031eb1"
	I0731 15:13:05.353001    4988 logs.go:123] Gathering logs for etcd [f6335319f7f7] ...
	I0731 15:13:05.353012    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6335319f7f7"
	I0731 15:13:05.367337    4988 logs.go:123] Gathering logs for kube-scheduler [64ddcf376840] ...
	I0731 15:13:05.367348    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64ddcf376840"
	I0731 15:13:05.390465    4988 logs.go:123] Gathering logs for kube-proxy [717215fb940e] ...
	I0731 15:13:05.390476    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 717215fb940e"
	I0731 15:13:05.401818    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:13:05.401830    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:13:05.423541    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:13:05.423550    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:13:05.427503    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:13:05.427513    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:13:05.465143    4988 logs.go:123] Gathering logs for kube-controller-manager [6533d1b9cbf0] ...
	I0731 15:13:05.465154    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6533d1b9cbf0"
	I0731 15:13:05.483625    4988 logs.go:123] Gathering logs for kube-controller-manager [6ae13f04c4cd] ...
	I0731 15:13:05.483635    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ae13f04c4cd"
	I0731 15:13:05.498691    4988 logs.go:123] Gathering logs for etcd [fe739fbe2f95] ...
	I0731 15:13:05.498701    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe739fbe2f95"
	I0731 15:13:05.513498    4988 logs.go:123] Gathering logs for coredns [cf18dd58b00d] ...
	I0731 15:13:05.513510    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf18dd58b00d"
	I0731 15:13:08.030500    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:13.032744    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:13.032794    4988 kubeadm.go:597] duration metric: took 4m4.059469458s to restartPrimaryControlPlane
	W0731 15:13:13.032897    4988 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 15:13:13.032913    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 15:13:14.062644    4988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.029736792s)
	I0731 15:13:14.062697    4988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 15:13:14.067899    4988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 15:13:14.070570    4988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 15:13:14.073503    4988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 15:13:14.073509    4988 kubeadm.go:157] found existing configuration files:
	
	I0731 15:13:14.073534    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0731 15:13:14.075929    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 15:13:14.075954    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 15:13:14.078697    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0731 15:13:14.081515    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 15:13:14.081541    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 15:13:14.083996    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0731 15:13:14.086527    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 15:13:14.086547    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 15:13:14.089701    4988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0731 15:13:14.092184    4988 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 15:13:14.092203    4988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 15:13:14.094872    4988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 15:13:14.110781    4988 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 15:13:14.110822    4988 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 15:13:14.158912    4988 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 15:13:14.158997    4988 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 15:13:14.159060    4988 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 15:13:14.208842    4988 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 15:13:14.216980    4988 out.go:204]   - Generating certificates and keys ...
	I0731 15:13:14.217037    4988 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 15:13:14.217090    4988 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 15:13:14.217165    4988 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 15:13:14.217195    4988 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 15:13:14.217227    4988 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 15:13:14.217288    4988 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 15:13:14.217317    4988 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 15:13:14.217368    4988 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 15:13:14.217411    4988 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 15:13:14.217447    4988 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 15:13:14.217468    4988 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 15:13:14.217495    4988 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 15:13:14.374424    4988 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 15:13:14.479400    4988 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 15:13:14.574679    4988 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 15:13:14.694336    4988 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 15:13:14.727013    4988 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 15:13:14.727442    4988 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 15:13:14.727475    4988 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 15:13:14.796087    4988 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 15:13:14.799836    4988 out.go:204]   - Booting up control plane ...
	I0731 15:13:14.799885    4988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 15:13:14.799931    4988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 15:13:14.799964    4988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 15:13:14.800004    4988 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 15:13:14.800099    4988 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 15:13:19.301378    4988 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503107 seconds
	I0731 15:13:19.301434    4988 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 15:13:19.304807    4988 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 15:13:19.822373    4988 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 15:13:19.822769    4988 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-609000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 15:13:20.325573    4988 kubeadm.go:310] [bootstrap-token] Using token: 464iif.j93mcmeumustwbfb
	I0731 15:13:20.331600    4988 out.go:204]   - Configuring RBAC rules ...
	I0731 15:13:20.331663    4988 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 15:13:20.331717    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 15:13:20.338308    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 15:13:20.339198    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 15:13:20.340097    4988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 15:13:20.341031    4988 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 15:13:20.344438    4988 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 15:13:20.518234    4988 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 15:13:20.729786    4988 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 15:13:20.730504    4988 kubeadm.go:310] 
	I0731 15:13:20.730533    4988 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 15:13:20.730538    4988 kubeadm.go:310] 
	I0731 15:13:20.730575    4988 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 15:13:20.730580    4988 kubeadm.go:310] 
	I0731 15:13:20.730592    4988 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 15:13:20.730625    4988 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 15:13:20.730656    4988 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 15:13:20.730659    4988 kubeadm.go:310] 
	I0731 15:13:20.730688    4988 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 15:13:20.730709    4988 kubeadm.go:310] 
	I0731 15:13:20.730733    4988 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 15:13:20.730736    4988 kubeadm.go:310] 
	I0731 15:13:20.730872    4988 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 15:13:20.730984    4988 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 15:13:20.731026    4988 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 15:13:20.731031    4988 kubeadm.go:310] 
	I0731 15:13:20.731137    4988 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 15:13:20.731286    4988 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 15:13:20.731297    4988 kubeadm.go:310] 
	I0731 15:13:20.731409    4988 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 464iif.j93mcmeumustwbfb \
	I0731 15:13:20.731466    4988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a \
	I0731 15:13:20.731479    4988 kubeadm.go:310] 	--control-plane 
	I0731 15:13:20.731482    4988 kubeadm.go:310] 
	I0731 15:13:20.731523    4988 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 15:13:20.731527    4988 kubeadm.go:310] 
	I0731 15:13:20.731583    4988 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 464iif.j93mcmeumustwbfb \
	I0731 15:13:20.731635    4988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:77f8405e6ec8b014927a913cafeac0f50b391fc962197b4a6a5507cca10a1b1a 
	I0731 15:13:20.732006    4988 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 15:13:20.732155    4988 cni.go:84] Creating CNI manager for ""
	I0731 15:13:20.732164    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:13:20.735915    4988 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 15:13:20.740281    4988 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 15:13:20.743421    4988 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 15:13:20.749719    4988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 15:13:20.749824    4988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 15:13:20.749852    4988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-609000 minikube.k8s.io/updated_at=2024_07_31T15_13_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=stopped-upgrade-609000 minikube.k8s.io/primary=true
	I0731 15:13:20.804957    4988 ops.go:34] apiserver oom_adj: -16
	I0731 15:13:20.804971    4988 kubeadm.go:1113] duration metric: took 55.23225ms to wait for elevateKubeSystemPrivileges
	I0731 15:13:20.804977    4988 kubeadm.go:394] duration metric: took 4m11.845083292s to StartCluster
	I0731 15:13:20.804987    4988 settings.go:142] acquiring lock: {Name:mk4ba9457258541473c3bcf6c2e4b75027bd146e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:13:20.805080    4988 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:13:20.805484    4988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/kubeconfig: {Name:mk3ff8223f9cd933fc3424e220c63db210741fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:13:20.805702    4988 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:13:20.805733    4988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 15:13:20.805776    4988 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-609000"
	I0731 15:13:20.805791    4988 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-609000"
	W0731 15:13:20.805794    4988 addons.go:243] addon storage-provisioner should already be in state true
	I0731 15:13:20.805807    4988 host.go:66] Checking if "stopped-upgrade-609000" exists ...
	I0731 15:13:20.805786    4988 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-609000"
	I0731 15:13:20.805836    4988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-609000"
	I0731 15:13:20.805940    4988 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:13:20.807133    4988 kapi.go:59] client config for stopped-upgrade-609000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/stopped-upgrade-609000/client.key", CAFile:"/Users/jenkins/minikube-integration/19312-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101950700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 15:13:20.807264    4988 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-609000"
	W0731 15:13:20.807270    4988 addons.go:243] addon default-storageclass should already be in state true
	I0731 15:13:20.807278    4988 host.go:66] Checking if "stopped-upgrade-609000" exists ...
	I0731 15:13:20.808979    4988 out.go:177] * Verifying Kubernetes components...
	I0731 15:13:20.809508    4988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 15:13:20.811989    4988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 15:13:20.812005    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:13:20.817893    4988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 15:13:20.823927    4988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 15:13:20.827927    4988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:13:20.827938    4988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 15:13:20.827947    4988 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/stopped-upgrade-609000/id_rsa Username:docker}
	I0731 15:13:20.899567    4988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 15:13:20.905890    4988 api_server.go:52] waiting for apiserver process to appear ...
	I0731 15:13:20.905959    4988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 15:13:20.909169    4988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 15:13:20.913525    4988 api_server.go:72] duration metric: took 107.806833ms to wait for apiserver process to appear ...
	I0731 15:13:20.913537    4988 api_server.go:88] waiting for apiserver healthz status ...
	I0731 15:13:20.913546    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:20.939004    4988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 15:13:25.915554    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:25.915589    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:30.915766    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:30.915818    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:35.916031    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:35.916074    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:40.916445    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:40.916498    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:45.916987    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:45.917027    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:13:50.917874    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:50.917919    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 15:13:51.263127    4988 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 15:13:51.265925    4988 out.go:177] * Enabled addons: storage-provisioner
	I0731 15:13:51.276837    4988 addons.go:510] duration metric: took 30.471595833s for enable addons: enabled=[storage-provisioner]
	I0731 15:13:55.918946    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:13:55.918979    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:00.920144    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:00.920186    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:05.921642    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:05.921669    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:10.923572    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:10.923630    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:15.925772    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:15.925811    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:20.928066    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:20.928261    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:14:20.970198    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:14:20.970274    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:14:20.981813    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:14:20.981887    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:14:20.992511    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:14:20.992579    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:14:21.004108    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:14:21.004170    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:14:21.014415    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:14:21.014479    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:14:21.025218    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:14:21.025291    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:14:21.035262    4988 logs.go:276] 0 containers: []
	W0731 15:14:21.035274    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:14:21.035328    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:14:21.046011    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:14:21.046025    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:14:21.046031    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:14:21.080118    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:14:21.080126    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:14:21.084681    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:14:21.084689    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:14:21.100442    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:14:21.100453    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:14:21.113638    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:14:21.113652    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:14:21.125684    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:14:21.125695    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:14:21.150050    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:14:21.150059    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:14:21.186855    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:14:21.186869    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:14:21.201308    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:14:21.201319    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:14:21.214208    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:14:21.214221    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:14:21.229416    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:14:21.229425    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:14:21.251237    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:14:21.251248    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:14:21.262457    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:14:21.262468    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:14:23.776296    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:28.779012    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:28.779226    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:14:28.806490    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:14:28.806594    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:14:28.821934    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:14:28.822006    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:14:28.834812    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:14:28.834887    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:14:28.845669    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:14:28.845739    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:14:28.856060    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:14:28.856119    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:14:28.866262    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:14:28.866326    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:14:28.876490    4988 logs.go:276] 0 containers: []
	W0731 15:14:28.876500    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:14:28.876546    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:14:28.887067    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:14:28.887081    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:14:28.887086    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:14:28.910246    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:14:28.910256    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:14:28.921576    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:14:28.921586    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:14:28.925722    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:14:28.925731    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:14:28.940970    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:14:28.940983    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:14:28.954956    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:14:28.954969    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:14:28.969392    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:14:28.969405    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:14:28.980716    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:14:28.980730    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:14:28.998905    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:14:28.998915    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:14:29.033369    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:14:29.033379    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:14:29.066650    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:14:29.066664    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:14:29.081001    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:14:29.081013    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:14:29.098119    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:14:29.098131    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:14:31.614259    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:36.617043    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:36.617457    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:14:36.651473    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:14:36.651604    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:14:36.670826    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:14:36.670922    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:14:36.685026    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:14:36.685091    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:14:36.698275    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:14:36.698339    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:14:36.709340    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:14:36.709409    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:14:36.719388    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:14:36.719451    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:14:36.733851    4988 logs.go:276] 0 containers: []
	W0731 15:14:36.733861    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:14:36.733909    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:14:36.744559    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:14:36.744580    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:14:36.744585    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:14:36.779366    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:14:36.779379    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:14:36.794069    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:14:36.794082    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:14:36.808751    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:14:36.808763    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:14:36.826056    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:14:36.826068    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:14:36.841422    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:14:36.841432    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:14:36.875592    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:14:36.875600    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:14:36.880037    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:14:36.880046    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:14:36.891628    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:14:36.891641    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:14:36.908519    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:14:36.908528    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:14:36.931403    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:14:36.931413    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:14:36.943141    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:14:36.943152    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:14:36.956761    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:14:36.956772    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:14:39.471017    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:44.473772    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:44.474176    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:14:44.514791    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:14:44.514933    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:14:44.542931    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:14:44.543024    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:14:44.557723    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:14:44.557791    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:14:44.572065    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:14:44.572136    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:14:44.583078    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:14:44.583149    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:14:44.593951    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:14:44.594017    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:14:44.605443    4988 logs.go:276] 0 containers: []
	W0731 15:14:44.605454    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:14:44.605509    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:14:44.618282    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:14:44.618296    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:14:44.618301    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:14:44.650973    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:14:44.650986    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:14:44.666994    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:14:44.667005    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:14:44.681845    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:14:44.681858    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:14:44.693778    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:14:44.693791    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:14:44.710522    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:14:44.710533    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:14:44.728090    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:14:44.728099    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:14:44.752634    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:14:44.752646    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:14:44.757020    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:14:44.757029    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:14:44.794124    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:14:44.794136    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:14:44.806029    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:14:44.806041    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:14:44.817914    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:14:44.817925    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:14:44.829729    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:14:44.829742    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
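	[editor's note] After each failed probe, the loop maps every control-plane component to its container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` before gathering logs; the `1 containers: [...]` / `0 containers: []` lines are the parsed result. A sketch of the same enumeration, shelling out the way the ssh_runner lines show (the helper name listContainers is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers, running or exited, whose
// name matches the kubeadm convention k8s_<component> — the exact filter this
// log applies for kube-apiserver, etcd, coredns, kube-scheduler, and so on.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; an empty result yields the "0 containers: []" case
	// that triggers the `No container was found matching "kindnet"` warning.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```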
	I0731 15:14:47.343002    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:14:52.345387    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:14:52.345737    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:14:52.376075    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:14:52.376200    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:14:52.395693    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:14:52.395777    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:14:52.409720    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:14:52.409794    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:14:52.421846    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:14:52.421909    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:14:52.434112    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:14:52.434179    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:14:52.445060    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:14:52.445124    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:14:52.454887    4988 logs.go:276] 0 containers: []
	W0731 15:14:52.454896    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:14:52.454944    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:14:52.465294    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:14:52.465307    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:14:52.465311    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:14:52.477140    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:14:52.477151    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:14:52.488808    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:14:52.488817    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:14:52.500568    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:14:52.500577    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:14:52.534988    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:14:52.534999    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:14:52.551615    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:14:52.551627    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:14:52.565737    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:14:52.565748    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:14:52.580798    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:14:52.580812    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:14:52.598219    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:14:52.598228    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:14:52.610610    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:14:52.610625    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:14:52.633498    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:14:52.633508    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:14:52.637452    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:14:52.637458    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:14:52.671952    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:14:52.671966    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
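	[editor's note] Every "Gathering logs for X ..." pair above runs one collection command through `/bin/bash -c` over SSH: `docker logs --tail 400 <id>` for a container, `journalctl -u <unit> -n 400` for a systemd unit, and the dmesg pipeline for kernel messages. A self-contained sketch of that per-source dispatch, run locally rather than over SSH as minikube's ssh_runner.go does; the container ID is illustrative, copied from the kube-apiserver entries above:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather executes one log-collection command through bash, the same shape as
// every `Run: /bin/bash -c "..."` line in this log.
func gather(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// The per-source commands shown in the log, keyed by the label used in
	// the "Gathering logs for ..." lines.
	sources := map[string]string{
		"kube-apiserver": "docker logs --tail 400 463dfbfae6a2",
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		if out, err := gather(cmd); err != nil {
			fmt.Println(name, "failed:", err)
		} else {
			fmt.Print(out)
		}
	}
}
```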
	I0731 15:14:55.188590    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:00.191272    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:00.191646    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:00.231488    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:00.231616    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:00.253252    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:00.253364    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:00.267977    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:15:00.268050    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:00.280943    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:00.281011    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:00.291791    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:00.291855    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:00.302885    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:00.302946    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:00.313283    4988 logs.go:276] 0 containers: []
	W0731 15:15:00.313293    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:00.313343    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:00.324044    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:00.324057    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:00.324063    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:00.358471    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:00.358480    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:00.372842    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:00.372852    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:00.385210    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:00.385222    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:00.403233    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:00.403243    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:00.408052    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:00.408062    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:00.442156    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:00.442168    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:00.456708    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:00.456720    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:00.470108    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:00.470121    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:00.481713    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:00.481723    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:00.493051    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:00.493062    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:00.504410    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:00.504420    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:00.527326    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:00.527336    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
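	[editor's note] The "container status" step uses a shell fallback chain: ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` prefers crictl when it is on PATH and falls back to `docker ps -a` when crictl is missing or fails. The same `A || B` logic expressed as a Go sketch (containerStatus is a hypothetical helper, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when installed, then falls back to docker,
// mirroring the `crictl ps -a || docker ps -a` chain in the log.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	// Fallback: the Docker runtime view of all containers.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}
```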
	I0731 15:15:03.040531    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:08.043334    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:08.043700    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:08.077996    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:08.078133    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:08.097920    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:08.098020    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:08.111464    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:15:08.111539    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:08.123425    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:08.123499    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:08.133696    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:08.133766    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:08.143728    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:08.143801    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:08.158520    4988 logs.go:276] 0 containers: []
	W0731 15:15:08.158532    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:08.158594    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:08.172780    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:08.172797    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:08.172802    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:08.186277    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:08.186290    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:08.198476    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:08.198489    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:08.221558    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:08.221569    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:08.232650    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:08.232660    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:08.267396    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:08.267402    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:08.285877    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:08.285885    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:08.302380    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:08.302393    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:08.314419    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:08.314429    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:08.328778    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:08.328787    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:08.346897    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:08.346909    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:08.358274    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:08.358283    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:08.362339    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:08.362345    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:10.900027    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:15.902612    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:15.902968    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:15.943446    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:15.943579    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:15.962582    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:15.962698    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:15.977556    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:15:15.977637    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:15.990374    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:15.990457    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:16.001175    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:16.001246    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:16.012015    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:16.012084    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:16.022264    4988 logs.go:276] 0 containers: []
	W0731 15:15:16.022273    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:16.022330    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:16.035995    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:16.036010    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:16.036014    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:16.055781    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:16.055791    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:16.067594    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:16.067606    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:16.071708    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:16.071716    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:16.086409    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:16.086419    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:16.100720    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:16.100730    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:16.112841    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:16.112851    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:16.124648    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:16.124659    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:16.139908    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:16.139918    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:16.152213    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:16.152224    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:16.163781    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:16.163791    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:16.199113    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:16.199120    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:16.233388    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:16.233398    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:18.759700    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:23.762511    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:23.762617    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:23.774006    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:23.774076    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:23.786115    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:23.786182    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:23.797082    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:15:23.797144    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:23.807389    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:23.807453    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:23.817223    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:23.817281    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:23.827717    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:23.827772    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:23.838300    4988 logs.go:276] 0 containers: []
	W0731 15:15:23.838313    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:23.838361    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:23.848432    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:23.848445    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:23.848450    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:23.883642    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:23.883651    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:23.895901    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:23.895914    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:23.913508    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:23.913518    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:23.924553    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:23.924565    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:23.928772    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:23.928781    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:23.948448    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:23.948460    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:23.962435    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:23.962446    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:23.973642    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:23.973655    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:23.985058    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:23.985068    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:23.999257    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:23.999269    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:24.010746    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:24.010759    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:24.033049    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:24.033056    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:26.567830    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:31.570658    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:31.571168    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:31.609716    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:31.609851    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:31.631417    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:31.631532    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:31.650396    4988 logs.go:276] 2 containers: [1585146d8083 88f78c60eebb]
	I0731 15:15:31.650463    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:31.662179    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:31.662248    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:31.673086    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:31.673151    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:31.684030    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:31.684094    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:31.694220    4988 logs.go:276] 0 containers: []
	W0731 15:15:31.694231    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:31.694290    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:31.706988    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:31.707004    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:31.707009    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:31.740925    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:31.740935    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:31.745290    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:31.745296    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:31.757176    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:31.757186    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:31.771779    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:31.771793    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:31.789342    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:31.789353    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:31.800411    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:31.800423    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:31.825297    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:31.825309    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:31.859002    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:31.859014    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:31.873184    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:31.873196    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:31.889421    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:31.889434    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:31.902404    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:31.902417    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:31.913978    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:31.913991    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:34.427610    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:39.428405    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:39.428463    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:39.440358    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:39.440429    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:39.457107    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:39.457168    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:39.467773    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:15:39.467834    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:39.478912    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:39.478964    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:39.490876    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:39.490951    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:39.503029    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:39.503076    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:39.513093    4988 logs.go:276] 0 containers: []
	W0731 15:15:39.513104    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:39.513157    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:39.527816    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:39.527832    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:15:39.527837    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:15:39.541276    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:15:39.541288    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:15:39.553480    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:39.553492    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:39.558118    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:39.558130    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:39.573961    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:39.573974    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:39.594565    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:39.594577    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:39.606965    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:39.606977    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:39.623257    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:39.623269    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:39.635546    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:39.635561    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:39.675201    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:39.675210    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:39.689929    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:39.689941    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:39.705516    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:39.705531    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:39.718848    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:39.718859    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:39.753778    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:39.753793    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:39.778892    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:39.778907    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:42.304886    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:47.307488    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:47.307956    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:47.343965    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:47.344096    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:47.364110    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:47.364226    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:47.379353    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:15:47.379441    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:47.391606    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:47.391674    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:47.402621    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:47.402693    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:47.418200    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:47.418264    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:47.428253    4988 logs.go:276] 0 containers: []
	W0731 15:15:47.428264    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:47.428328    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:47.438756    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:47.438778    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:47.438783    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:47.482286    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:47.482297    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:47.497230    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:47.497241    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:47.515181    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:47.515193    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:47.549920    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:47.549928    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:47.564311    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:15:47.564324    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:15:47.575721    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:15:47.575736    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:15:47.587449    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:47.587460    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:47.599055    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:47.599067    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:47.610973    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:47.610988    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:47.635449    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:47.635456    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:47.651734    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:47.651748    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:47.666267    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:47.666280    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:47.677574    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:47.677588    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:47.694747    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:47.694760    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:50.202187    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:15:55.204937    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:15:55.205290    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:15:55.238928    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:15:55.239053    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:15:55.258996    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:15:55.259089    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:15:55.273825    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:15:55.273900    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:15:55.286151    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:15:55.286222    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:15:55.298187    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:15:55.298249    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:15:55.308715    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:15:55.308782    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:15:55.319265    4988 logs.go:276] 0 containers: []
	W0731 15:15:55.319275    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:15:55.319327    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:15:55.330070    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:15:55.330086    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:15:55.330093    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:15:55.345458    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:15:55.345471    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:15:55.364031    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:15:55.364042    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:15:55.400169    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:15:55.400183    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:15:55.414623    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:15:55.414633    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:15:55.425989    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:15:55.426001    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:15:55.437206    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:15:55.437216    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:15:55.448610    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:15:55.448622    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:15:55.481688    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:15:55.481696    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:15:55.493006    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:15:55.493021    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:15:55.504733    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:15:55.504742    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:15:55.529511    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:15:55.529519    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:15:55.533659    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:15:55.533664    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:15:55.545474    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:15:55.545487    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:15:55.557306    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:15:55.557317    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:15:58.072980    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:03.074383    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:03.074460    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:03.090367    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:03.090420    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:03.101885    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:03.101937    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:03.112562    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:03.112624    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:03.123820    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:03.123896    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:03.138065    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:03.138128    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:03.154513    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:03.154562    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:03.167264    4988 logs.go:276] 0 containers: []
	W0731 15:16:03.167277    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:03.167325    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:03.178130    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:03.178146    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:03.178151    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:03.212443    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:03.212456    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:03.224372    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:03.224381    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:03.240592    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:03.240604    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:03.253948    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:03.253960    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:03.258585    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:03.258596    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:03.273027    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:03.273040    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:03.286326    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:03.286341    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:03.303634    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:03.303649    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:03.322388    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:03.322400    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:03.337674    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:03.337687    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:03.358600    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:03.358612    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:03.397123    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:03.397135    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:03.416724    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:03.416737    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:03.442701    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:03.442716    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:05.958300    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:10.960953    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:10.961429    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:10.996987    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:10.997111    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:11.017472    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:11.017570    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:11.033181    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:11.033263    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:11.046770    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:11.046845    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:11.057614    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:11.057685    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:11.068394    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:11.068455    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:11.078995    4988 logs.go:276] 0 containers: []
	W0731 15:16:11.079011    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:11.079063    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:11.089458    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:11.089472    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:11.089477    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:11.122403    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:11.122418    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:11.138470    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:11.138484    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:11.150566    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:11.150580    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:11.162468    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:11.162481    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:11.178081    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:11.178093    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:11.201306    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:11.201314    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:11.205313    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:11.205322    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:11.240212    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:11.240225    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:11.254584    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:11.254598    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:11.272644    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:11.272657    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:11.284961    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:11.284975    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:11.296716    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:11.296727    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:11.308588    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:11.308601    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:11.322968    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:11.322981    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:13.836509    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:18.839315    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:18.839783    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:18.881834    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:18.881966    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:18.902875    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:18.902981    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:18.918607    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:18.918682    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:18.930549    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:18.930615    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:18.945494    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:18.945558    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:18.956282    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:18.956349    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:18.966884    4988 logs.go:276] 0 containers: []
	W0731 15:16:18.966895    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:18.966959    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:18.977292    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:18.977307    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:18.977312    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:19.011807    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:19.011819    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:19.024328    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:19.024339    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:19.035856    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:19.035867    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:19.048129    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:19.048140    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:19.082697    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:19.082705    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:19.095242    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:19.095252    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:19.109708    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:19.109717    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:19.132608    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:19.132615    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:19.137071    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:19.137080    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:19.154684    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:19.154697    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:19.176564    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:19.176578    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:19.204972    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:19.204984    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:19.217736    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:19.217748    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:19.229476    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:19.229488    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:21.746057    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:26.748771    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:26.748931    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:26.761108    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:26.761177    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:26.771824    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:26.771892    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:26.782760    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:26.782824    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:26.792746    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:26.792814    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:26.803798    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:26.803853    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:26.818207    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:26.818275    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:26.829396    4988 logs.go:276] 0 containers: []
	W0731 15:16:26.829408    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:26.829475    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:26.841080    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:26.841097    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:26.841103    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:26.856174    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:26.856189    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:26.869506    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:26.869522    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:26.882489    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:26.882500    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:26.956688    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:26.956701    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:26.982261    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:26.982271    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:27.001050    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:27.001063    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:27.026759    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:27.026773    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:27.040988    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:27.041000    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:27.076867    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:27.076886    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:27.081812    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:27.081822    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:27.094739    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:27.094753    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:27.113122    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:27.113135    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:27.128388    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:27.128398    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:27.141970    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:27.141981    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:29.658799    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:34.661410    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:34.661653    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:34.689688    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:34.689814    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:34.709015    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:34.709098    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:34.722472    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:34.722542    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:34.740825    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:34.740890    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:34.751021    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:34.751095    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:34.761211    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:34.761282    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:34.773837    4988 logs.go:276] 0 containers: []
	W0731 15:16:34.773848    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:34.773911    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:34.784819    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:34.784837    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:34.784844    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:34.796936    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:34.796946    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:34.809092    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:34.809104    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:34.820445    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:34.820459    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:34.834981    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:34.834994    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:34.857376    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:34.857386    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:34.892138    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:34.892150    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:34.905375    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:34.905387    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:34.920959    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:34.920973    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:34.932500    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:34.932513    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:34.950718    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:34.950730    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:34.975501    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:34.975511    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:34.987294    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:34.987310    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:34.991909    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:34.991916    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:35.029042    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:35.029057    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:37.545773    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:42.548279    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:42.548695    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:42.588432    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:42.588553    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:42.620066    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:42.620150    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:42.633926    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:42.634006    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:42.645540    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:42.645605    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:42.656602    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:42.656667    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:42.667445    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:42.667511    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:42.677680    4988 logs.go:276] 0 containers: []
	W0731 15:16:42.677695    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:42.677751    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:42.688827    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:42.688845    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:42.688851    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:42.710102    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:42.710113    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:42.721870    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:42.721883    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:42.736602    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:42.736615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:42.754000    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:42.754012    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:42.796496    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:42.796510    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:42.809092    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:42.809103    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:42.830071    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:42.830081    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:42.846099    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:42.846110    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:42.858126    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:42.858136    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:42.869688    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:42.869700    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:42.874085    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:42.874094    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:42.885979    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:42.885989    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:42.908572    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:42.908579    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:42.920031    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:42.920042    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:45.454958    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:50.456090    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:50.456692    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:50.494518    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:50.494646    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:50.516555    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:50.516676    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:50.531582    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:50.531658    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:50.543886    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:50.543955    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:50.555351    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:50.555411    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:50.565847    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:50.565918    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:50.576450    4988 logs.go:276] 0 containers: []
	W0731 15:16:50.576464    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:50.576513    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:50.587176    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:50.587193    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:50.587198    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:50.601912    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:50.601926    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:50.623657    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:50.623666    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:50.635534    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:50.635548    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:50.672072    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:50.672083    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:50.686603    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:50.686615    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:50.702486    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:50.702498    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:50.714706    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:50.714719    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:50.747128    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:50.747138    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:50.761125    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:50.761137    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:50.773061    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:50.773075    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:50.784811    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:50.784823    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:16:50.796211    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:50.796222    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:50.802057    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:50.802067    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:50.814021    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:50.814035    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:53.339531    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:16:58.341856    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:16:58.342430    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:16:58.383557    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:16:58.383685    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:16:58.404801    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:16:58.404900    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:16:58.420698    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:16:58.420775    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:16:58.433264    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:16:58.433333    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:16:58.450844    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:16:58.450912    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:16:58.462548    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:16:58.462611    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:16:58.472780    4988 logs.go:276] 0 containers: []
	W0731 15:16:58.472793    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:16:58.472847    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:16:58.483267    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:16:58.483285    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:16:58.483290    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:16:58.500892    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:16:58.500902    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:16:58.523853    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:16:58.523863    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:16:58.561741    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:16:58.561750    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:16:58.573513    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:16:58.573523    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:16:58.588510    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:16:58.588521    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:16:58.603357    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:16:58.603366    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:16:58.616871    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:16:58.616884    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:16:58.635741    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:16:58.635754    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:16:58.673286    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:16:58.673308    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:16:58.677640    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:16:58.677650    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:16:58.692279    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:16:58.692292    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:16:58.705657    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:16:58.705668    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:16:58.721584    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:16:58.721593    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:16:58.733392    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:16:58.733404    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:17:01.246833    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:17:06.249402    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:17:06.249874    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:17:06.289382    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:17:06.289528    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:17:06.310796    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:17:06.310909    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:17:06.326989    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:17:06.327069    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:17:06.339811    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:17:06.339881    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:17:06.356054    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:17:06.356128    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:17:06.367731    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:17:06.367801    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:17:06.378514    4988 logs.go:276] 0 containers: []
	W0731 15:17:06.378526    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:17:06.378581    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:17:06.389495    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:17:06.389511    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:17:06.389516    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:17:06.421961    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:17:06.421968    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:17:06.433459    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:17:06.433473    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:17:06.446113    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:17:06.446127    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:17:06.450407    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:17:06.450416    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:17:06.484540    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:17:06.484554    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:17:06.499710    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:17:06.499723    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:17:06.511763    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:17:06.511776    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:17:06.523230    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:17:06.523242    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:17:06.535108    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:17:06.535118    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:17:06.552518    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:17:06.552528    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:17:06.566901    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:17:06.566910    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:17:06.581951    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:17:06.581965    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:17:06.593897    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:17:06.593912    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:17:06.607995    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:17:06.608007    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:17:09.132071    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:17:14.133454    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:17:14.133611    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 15:17:14.157617    4988 logs.go:276] 1 containers: [463dfbfae6a2]
	I0731 15:17:14.157734    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 15:17:14.173096    4988 logs.go:276] 1 containers: [0f01ae831f0b]
	I0731 15:17:14.173179    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 15:17:14.186139    4988 logs.go:276] 4 containers: [83cbfda0ca66 ca736092c05a 1585146d8083 88f78c60eebb]
	I0731 15:17:14.186213    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 15:17:14.202511    4988 logs.go:276] 1 containers: [522e71f4df39]
	I0731 15:17:14.202576    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 15:17:14.212899    4988 logs.go:276] 1 containers: [518316ebf5a1]
	I0731 15:17:14.212955    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 15:17:14.223293    4988 logs.go:276] 1 containers: [ed5287ef32b1]
	I0731 15:17:14.223351    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 15:17:14.233230    4988 logs.go:276] 0 containers: []
	W0731 15:17:14.233240    4988 logs.go:278] No container was found matching "kindnet"
	I0731 15:17:14.233286    4988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 15:17:14.244279    4988 logs.go:276] 1 containers: [dcad4d6d4a45]
	I0731 15:17:14.244295    4988 logs.go:123] Gathering logs for kube-apiserver [463dfbfae6a2] ...
	I0731 15:17:14.244300    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 463dfbfae6a2"
	I0731 15:17:14.258543    4988 logs.go:123] Gathering logs for coredns [1585146d8083] ...
	I0731 15:17:14.258551    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1585146d8083"
	I0731 15:17:14.270276    4988 logs.go:123] Gathering logs for Docker ...
	I0731 15:17:14.270285    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 15:17:14.292556    4988 logs.go:123] Gathering logs for dmesg ...
	I0731 15:17:14.292566    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 15:17:14.297077    4988 logs.go:123] Gathering logs for etcd [0f01ae831f0b] ...
	I0731 15:17:14.297084    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f01ae831f0b"
	I0731 15:17:14.313165    4988 logs.go:123] Gathering logs for coredns [83cbfda0ca66] ...
	I0731 15:17:14.313177    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83cbfda0ca66"
	I0731 15:17:14.324963    4988 logs.go:123] Gathering logs for kube-proxy [518316ebf5a1] ...
	I0731 15:17:14.324974    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 518316ebf5a1"
	I0731 15:17:14.336934    4988 logs.go:123] Gathering logs for kube-controller-manager [ed5287ef32b1] ...
	I0731 15:17:14.336945    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5287ef32b1"
	I0731 15:17:14.354808    4988 logs.go:123] Gathering logs for storage-provisioner [dcad4d6d4a45] ...
	I0731 15:17:14.354818    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcad4d6d4a45"
	I0731 15:17:14.367819    4988 logs.go:123] Gathering logs for coredns [88f78c60eebb] ...
	I0731 15:17:14.367828    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88f78c60eebb"
	I0731 15:17:14.383410    4988 logs.go:123] Gathering logs for container status ...
	I0731 15:17:14.383423    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 15:17:14.395159    4988 logs.go:123] Gathering logs for kubelet ...
	I0731 15:17:14.395168    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 15:17:14.429789    4988 logs.go:123] Gathering logs for describe nodes ...
	I0731 15:17:14.429795    4988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 15:17:14.463839    4988 logs.go:123] Gathering logs for coredns [ca736092c05a] ...
	I0731 15:17:14.463851    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca736092c05a"
	I0731 15:17:14.475839    4988 logs.go:123] Gathering logs for kube-scheduler [522e71f4df39] ...
	I0731 15:17:14.475850    4988 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 522e71f4df39"
	I0731 15:17:16.992326    4988 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 15:17:21.994514    4988 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 15:17:22.000720    4988 out.go:177] 
	W0731 15:17:22.005612    4988 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 15:17:22.005642    4988 out.go:239] * 
	W0731 15:17:22.008236    4988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:22.024517    4988 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-609000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (587.11s)
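The failure above is a timeout rather than a crash: the log shows minikube probing the guest apiserver's /healthz endpoint (https://10.0.2.15:8443) about every 2.5 seconds, each probe abandoned after roughly 5 seconds, until the overall "wait 6m0s for node" deadline runs out. A minimal Go sketch of that polling pattern, for orientation only (the URL, per-probe timeout, and deadline are read off the log; this is not minikube's actual api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Values below are taken from the log above, not from minikube's source.
	const healthzURL = "https://10.0.2.15:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second, // each probe "stopped" ~5s after "Checking"
		Transport: &http.Transport{
			// The bootstrapping apiserver presents a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node"
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthzURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver reported healthy")
				return
			}
		}
		time.Sleep(2500 * time.Millisecond) // gap between attempts in the log
	}
	fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
}

Between probes, minikube re-enumerates the control-plane containers and re-gathers their logs, which is why the same "Gathering logs for ..." block repeats above for every failed healthz attempt.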

TestPause/serial/Start (10.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-082000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-082000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.163670667s)

-- stdout --
	* [pause-082000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-082000" primary control-plane node in "pause-082000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-082000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-082000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-082000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-082000 -n pause-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-082000 -n pause-082000: exit status 7 (55.956667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-082000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.22s)
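Every qemu2 start in this group fails the same way: the driver cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so minikube retries the VM creation once and then exits with GUEST_PROVISION. The check is easy to reproduce outside minikube by dialing the unix socket directly; a minimal Go sketch, assuming only the socket path shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the ERROR lines in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon listening, this reproduces the
		// "Connection refused" that aborts every qemu2 start in this run.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If no daemon is listening (or the socket file does not exist), the dial fails with the same "Connection refused" seen in the ERROR lines above, which points at the host's socket_vmnet service rather than at minikube itself.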

TestNoKubernetes/serial/StartWithK8s (9.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-256000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-256000 --driver=qemu2 : exit status 80 (9.894021125s)

-- stdout --
	* [NoKubernetes-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-256000" primary control-plane node in "NoKubernetes-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-256000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-256000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000: exit status 7 (64.197792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.96s)
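The post-mortem helper above queries host state with a Go template (--format={{.Host}}) and accepts exit status 7 as potentially benign, since in this run it simply means the host is "Stopped". A minimal Go sketch of that check, assuming only what the log shows (the binary path, profile name, and the reading of exit code 7 come from the output above, not from helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are copied from the post-mortem above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "NoKubernetes-256000", "-n", "NoKubernetes-256000")
	out, err := cmd.Output() // stdout still carries the state on non-zero exit
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	fmt.Printf("state=%q exit=%d\n", string(out), code)
	if code == 7 {
		// Per the helper's own message, exit status 7 "may be ok":
		// here it means the host exists but is stopped.
		fmt.Println("host not running; skipping log retrieval")
	}
}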

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2 : exit status 80 (5.262883666s)

-- stdout --
	* [NoKubernetes-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-256000
	* Restarting existing qemu2 VM for "NoKubernetes-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000: exit status 7 (55.760417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248077375s)

-- stdout --
	* [NoKubernetes-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-256000
	* Restarting existing qemu2 VM for "NoKubernetes-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000: exit status 7 (32.110583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)
TestNoKubernetes/serial/StartNoArgs (5.32s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-256000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-256000 --driver=qemu2 : exit status 80 (5.26061325s)
-- stdout --
	* [NoKubernetes-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-256000
	* Restarting existing qemu2 VM for "NoKubernetes-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-256000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-256000 -n NoKubernetes-256000: exit status 7 (56.349875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
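
Every NoKubernetes failure above reduces to the same root cause: the qemu2 driver cannot dial the socket_vmnet Unix socket, so the VM never gets its network backend. Before rerunning the group it may be worth probing the socket directly. The following is a minimal sketch in Go; the path /var/run/socket_vmnet is taken from the logs above, everything else is illustrative:

// socketcheck.go - probe the Unix socket that the qemu2 driver dials.
// Illustrative only; the path comes from the failure logs, not from
// minikube's own source.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A dead or missing daemon reproduces the symptom in the log:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

If this dial fails the same way the tests do, the socket_vmnet daemon on the CI host is down (or listening on another path), and no minikube-side change will make these starts succeed.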
TestNetworkPlugins/group/auto/Start (9.75s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0731 15:15:29.019905    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.744385833s)
-- stdout --
	* [auto-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-531000" primary control-plane node in "auto-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 15:15:28.900408    5326 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:15:28.900534    5326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:15:28.900537    5326 out.go:304] Setting ErrFile to fd 2...
	I0731 15:15:28.900539    5326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:15:28.900660    5326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:15:28.901722    5326 out.go:298] Setting JSON to false
	I0731 15:15:28.918019    5326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4492,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:15:28.918086    5326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:15:28.924605    5326 out.go:177] * [auto-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:15:28.931539    5326 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:15:28.931579    5326 notify.go:220] Checking for updates...
	I0731 15:15:28.938543    5326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:15:28.941516    5326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:15:28.944537    5326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:15:28.947538    5326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:15:28.948897    5326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:15:28.951894    5326 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:15:28.951955    5326 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:15:28.952036    5326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:15:28.956516    5326 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:15:28.961550    5326 start.go:297] selected driver: qemu2
	I0731 15:15:28.961557    5326 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:15:28.961571    5326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:15:28.963863    5326 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:15:28.966522    5326 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:15:28.969647    5326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:15:28.969676    5326 cni.go:84] Creating CNI manager for ""
	I0731 15:15:28.969683    5326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:15:28.969690    5326 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:15:28.969715    5326 start.go:340] cluster config:
	{Name:auto-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:15:28.973439    5326 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:15:28.980543    5326 out.go:177] * Starting "auto-531000" primary control-plane node in "auto-531000" cluster
	I0731 15:15:28.984464    5326 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:15:28.984477    5326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:15:28.984486    5326 cache.go:56] Caching tarball of preloaded images
	I0731 15:15:28.984541    5326 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:15:28.984551    5326 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:15:28.984612    5326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/auto-531000/config.json ...
	I0731 15:15:28.984624    5326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/auto-531000/config.json: {Name:mk36e46a86014deca5cab29b1fcbfb81c9f672d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:15:28.984934    5326 start.go:360] acquireMachinesLock for auto-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:15:28.984968    5326 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "auto-531000"
	I0731 15:15:28.984980    5326 start.go:93] Provisioning new machine with config: &{Name:auto-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:15:28.985007    5326 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:15:28.992488    5326 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:15:29.008963    5326 start.go:159] libmachine.API.Create for "auto-531000" (driver="qemu2")
	I0731 15:15:29.008993    5326 client.go:168] LocalClient.Create starting
	I0731 15:15:29.009062    5326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:15:29.009090    5326 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:29.009099    5326 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:29.009139    5326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:15:29.009160    5326 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:29.009168    5326 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:29.009577    5326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:15:29.162137    5326 main.go:141] libmachine: Creating SSH key...
	I0731 15:15:29.187886    5326 main.go:141] libmachine: Creating Disk image...
	I0731 15:15:29.187892    5326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:15:29.188076    5326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2
	I0731 15:15:29.197437    5326 main.go:141] libmachine: STDOUT: 
	I0731 15:15:29.197453    5326 main.go:141] libmachine: STDERR: 
	I0731 15:15:29.197496    5326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2 +20000M
	I0731 15:15:29.205485    5326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:15:29.205499    5326 main.go:141] libmachine: STDERR: 
	I0731 15:15:29.205519    5326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2
	I0731 15:15:29.205523    5326 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:15:29.205534    5326 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:15:29.205557    5326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6d:7c:bd:ee:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2
	I0731 15:15:29.207110    5326 main.go:141] libmachine: STDOUT: 
	I0731 15:15:29.207123    5326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:15:29.207141    5326 client.go:171] duration metric: took 198.147709ms to LocalClient.Create
	I0731 15:15:31.209329    5326 start.go:128] duration metric: took 2.224326917s to createHost
	I0731 15:15:31.209404    5326 start.go:83] releasing machines lock for "auto-531000", held for 2.224462542s
	W0731 15:15:31.209666    5326 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:31.216571    5326 out.go:177] * Deleting "auto-531000" in qemu2 ...
	W0731 15:15:31.245039    5326 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:31.245073    5326 start.go:729] Will try again in 5 seconds ...
	I0731 15:15:36.247212    5326 start.go:360] acquireMachinesLock for auto-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:15:36.247638    5326 start.go:364] duration metric: took 321.958µs to acquireMachinesLock for "auto-531000"
	I0731 15:15:36.247746    5326 start.go:93] Provisioning new machine with config: &{Name:auto-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:15:36.247967    5326 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:15:36.257499    5326 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:15:36.291342    5326 start.go:159] libmachine.API.Create for "auto-531000" (driver="qemu2")
	I0731 15:15:36.291384    5326 client.go:168] LocalClient.Create starting
	I0731 15:15:36.291484    5326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:15:36.291543    5326 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:36.291559    5326 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:36.291627    5326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:15:36.291666    5326 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:36.291680    5326 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:36.292154    5326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:15:36.447696    5326 main.go:141] libmachine: Creating SSH key...
	I0731 15:15:36.554375    5326 main.go:141] libmachine: Creating Disk image...
	I0731 15:15:36.554382    5326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:15:36.554573    5326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2
	I0731 15:15:36.564055    5326 main.go:141] libmachine: STDOUT: 
	I0731 15:15:36.564067    5326 main.go:141] libmachine: STDERR: 
	I0731 15:15:36.564114    5326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2 +20000M
	I0731 15:15:36.572266    5326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:15:36.572279    5326 main.go:141] libmachine: STDERR: 
	I0731 15:15:36.572291    5326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2
	I0731 15:15:36.572296    5326 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:15:36.572305    5326 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:15:36.572338    5326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:25:ac:ab:5d:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/auto-531000/disk.qcow2
	I0731 15:15:36.574036    5326 main.go:141] libmachine: STDOUT: 
	I0731 15:15:36.574049    5326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:15:36.574061    5326 client.go:171] duration metric: took 282.676292ms to LocalClient.Create
	I0731 15:15:38.576134    5326 start.go:128] duration metric: took 2.328187666s to createHost
	I0731 15:15:38.576165    5326 start.go:83] releasing machines lock for "auto-531000", held for 2.328548333s
	W0731 15:15:38.576353    5326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:38.585652    5326 out.go:177] 
	W0731 15:15:38.592811    5326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:15:38.592824    5326 out.go:239] * 
	* 
	W0731 15:15:38.593983    5326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:15:38.605701    5326 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.75s)
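
The --alsologtostderr trace above makes the driver's recovery path visible: the first createHost fails at socket_vmnet_client, the half-created profile is deleted, the driver waits five seconds and tries once more, and only then does it surface GUEST_PROVISION. A compressed sketch of that control flow (hypothetical helper names createHost/deleteHost/startWithRetry, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the driver's create path; here it always fails
// the way the trace does, so the flow below replays the same story.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(profile string) { fmt.Printf("* Deleting %q in qemu2 ...\n", profile) }

func startWithRetry(profile string) error {
	err := createHost(profile)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteHost(profile)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(profile); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry("auto-531000"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

Run as-is, the sketch prints the same three-stage story as the log: warn, delete, retry, then exit with GUEST_PROVISION.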
TestNetworkPlugins/group/calico/Start (9.77s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.772291625s)
-- stdout --
	* [calico-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-531000" primary control-plane node in "calico-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0731 15:15:40.732571    5444 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:15:40.732734    5444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:15:40.732738    5444 out.go:304] Setting ErrFile to fd 2...
	I0731 15:15:40.732740    5444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:15:40.732868    5444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:15:40.734033    5444 out.go:298] Setting JSON to false
	I0731 15:15:40.750522    5444 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4504,"bootTime":1722459636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:15:40.750588    5444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:15:40.756887    5444 out.go:177] * [calico-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:15:40.764867    5444 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:15:40.764932    5444 notify.go:220] Checking for updates...
	I0731 15:15:40.771733    5444 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:15:40.774783    5444 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:15:40.777842    5444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:15:40.780845    5444 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:15:40.783814    5444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:15:40.787155    5444 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:15:40.787218    5444 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:15:40.787269    5444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:15:40.791806    5444 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:15:40.798821    5444 start.go:297] selected driver: qemu2
	I0731 15:15:40.798826    5444 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:15:40.798832    5444 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:15:40.801198    5444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:15:40.804799    5444 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:15:40.807875    5444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:15:40.807889    5444 cni.go:84] Creating CNI manager for "calico"
	I0731 15:15:40.807896    5444 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 15:15:40.807923    5444 start.go:340] cluster config:
	{Name:calico-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:15:40.811493    5444 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:15:40.818782    5444 out.go:177] * Starting "calico-531000" primary control-plane node in "calico-531000" cluster
	I0731 15:15:40.822810    5444 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:15:40.822823    5444 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:15:40.822841    5444 cache.go:56] Caching tarball of preloaded images
	I0731 15:15:40.822897    5444 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:15:40.822909    5444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:15:40.822965    5444 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/calico-531000/config.json ...
	I0731 15:15:40.822975    5444 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/calico-531000/config.json: {Name:mk5fd112f8badc46af95583705cc21bc5f72b42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:15:40.823302    5444 start.go:360] acquireMachinesLock for calico-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:15:40.823330    5444 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "calico-531000"
	I0731 15:15:40.823343    5444 start.go:93] Provisioning new machine with config: &{Name:calico-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:15:40.823369    5444 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:15:40.827790    5444 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:15:40.843143    5444 start.go:159] libmachine.API.Create for "calico-531000" (driver="qemu2")
	I0731 15:15:40.843167    5444 client.go:168] LocalClient.Create starting
	I0731 15:15:40.843234    5444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:15:40.843264    5444 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:40.843275    5444 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:40.843309    5444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:15:40.843332    5444 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:40.843343    5444 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:40.843729    5444 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:15:40.993634    5444 main.go:141] libmachine: Creating SSH key...
	I0731 15:15:41.038180    5444 main.go:141] libmachine: Creating Disk image...
	I0731 15:15:41.038189    5444 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:15:41.038371    5444 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2
	I0731 15:15:41.047448    5444 main.go:141] libmachine: STDOUT: 
	I0731 15:15:41.047465    5444 main.go:141] libmachine: STDERR: 
	I0731 15:15:41.047516    5444 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2 +20000M
	I0731 15:15:41.055734    5444 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:15:41.055748    5444 main.go:141] libmachine: STDERR: 
	I0731 15:15:41.055767    5444 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2
	I0731 15:15:41.055775    5444 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:15:41.055790    5444 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:15:41.055834    5444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:56:8b:13:60:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2
	I0731 15:15:41.057497    5444 main.go:141] libmachine: STDOUT: 
	I0731 15:15:41.057513    5444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:15:41.057531    5444 client.go:171] duration metric: took 214.362834ms to LocalClient.Create
	I0731 15:15:43.059713    5444 start.go:128] duration metric: took 2.236347417s to createHost
	I0731 15:15:43.059791    5444 start.go:83] releasing machines lock for "calico-531000", held for 2.236486709s
	W0731 15:15:43.059965    5444 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:43.076262    5444 out.go:177] * Deleting "calico-531000" in qemu2 ...
	W0731 15:15:43.099679    5444 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:43.099708    5444 start.go:729] Will try again in 5 seconds ...
	I0731 15:15:48.101792    5444 start.go:360] acquireMachinesLock for calico-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:15:48.102262    5444 start.go:364] duration metric: took 397.75µs to acquireMachinesLock for "calico-531000"
	I0731 15:15:48.102312    5444 start.go:93] Provisioning new machine with config: &{Name:calico-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:15:48.102634    5444 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:15:48.110166    5444 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:15:48.151531    5444 start.go:159] libmachine.API.Create for "calico-531000" (driver="qemu2")
	I0731 15:15:48.151580    5444 client.go:168] LocalClient.Create starting
	I0731 15:15:48.151697    5444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:15:48.151774    5444 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:48.151792    5444 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:48.151887    5444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:15:48.151928    5444 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:48.151942    5444 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:48.152482    5444 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:15:48.332326    5444 main.go:141] libmachine: Creating SSH key...
	I0731 15:15:48.415253    5444 main.go:141] libmachine: Creating Disk image...
	I0731 15:15:48.415265    5444 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:15:48.415457    5444 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2
	I0731 15:15:48.424931    5444 main.go:141] libmachine: STDOUT: 
	I0731 15:15:48.424951    5444 main.go:141] libmachine: STDERR: 
	I0731 15:15:48.425019    5444 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2 +20000M
	I0731 15:15:48.433115    5444 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:15:48.433142    5444 main.go:141] libmachine: STDERR: 
	I0731 15:15:48.433158    5444 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2
	I0731 15:15:48.433162    5444 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:15:48.433169    5444 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:15:48.433196    5444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ba:ba:11:0d:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/calico-531000/disk.qcow2
	I0731 15:15:48.434988    5444 main.go:141] libmachine: STDOUT: 
	I0731 15:15:48.435003    5444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:15:48.435014    5444 client.go:171] duration metric: took 283.434291ms to LocalClient.Create
	I0731 15:15:50.437200    5444 start.go:128] duration metric: took 2.3345695s to createHost
	I0731 15:15:50.437276    5444 start.go:83] releasing machines lock for "calico-531000", held for 2.335035375s
	W0731 15:15:50.437688    5444 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:50.448325    5444 out.go:177] 
	W0731 15:15:50.457441    5444 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:15:50.457486    5444 out.go:239] * 
	* 
	W0731 15:15:50.459976    5444 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:15:50.467268    5444 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
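
Note that everything up to the network step succeeds in each attempt: the qemu-img convert and qemu-img resize +20000M commands in the trace return cleanly, and only the socket_vmnet dial that follows fails. A small sketch that replays just those two image-preparation commands (the commands are copied from the "executing:" lines above; paths are shortened and the wrapper itself is illustrative, assuming qemu-img is on PATH):

package main

import (
	"fmt"
	"os/exec"
)

// run echoes the command the way the trace's "executing:" lines do,
// then reports its combined output and exit status.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\nOUTPUT: %s\n", name, args, out)
	return err
}

func main() {
	disk := "disk.qcow2"
	// qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk+".raw", disk); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	// qemu-img resize disk.qcow2 +20000M
	if err := run("qemu-img", "resize", disk, "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}

If both commands also succeed on the failing host, that is consistent with the trace: image preparation is fine and the breakage is isolated to the socket dial.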
TestNetworkPlugins/group/custom-flannel/Start (9.91s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.905662292s)

-- stdout --
	* [custom-flannel-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-531000" primary control-plane node in "custom-flannel-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:15:52.803861    5564 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:15:52.803999    5564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:15:52.804003    5564 out.go:304] Setting ErrFile to fd 2...
	I0731 15:15:52.804005    5564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:15:52.804144    5564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:15:52.805231    5564 out.go:298] Setting JSON to false
	I0731 15:15:52.822042    5564 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4516,"bootTime":1722459636,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:15:52.822155    5564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:15:52.828921    5564 out.go:177] * [custom-flannel-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:15:52.836886    5564 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:15:52.836975    5564 notify.go:220] Checking for updates...
	I0731 15:15:52.843894    5564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:15:52.846920    5564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:15:52.849851    5564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:15:52.852916    5564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:15:52.855893    5564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:15:52.859116    5564 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:15:52.859182    5564 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:15:52.859237    5564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:15:52.862835    5564 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:15:52.869835    5564 start.go:297] selected driver: qemu2
	I0731 15:15:52.869841    5564 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:15:52.869849    5564 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:15:52.872154    5564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:15:52.874883    5564 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:15:52.877965    5564 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:15:52.877980    5564 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 15:15:52.877988    5564 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0731 15:15:52.878025    5564 start.go:340] cluster config:
	{Name:custom-flannel-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:15:52.881772    5564 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:15:52.888856    5564 out.go:177] * Starting "custom-flannel-531000" primary control-plane node in "custom-flannel-531000" cluster
	I0731 15:15:52.892706    5564 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:15:52.892726    5564 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:15:52.892738    5564 cache.go:56] Caching tarball of preloaded images
	I0731 15:15:52.892805    5564 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:15:52.892813    5564 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:15:52.892873    5564 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/custom-flannel-531000/config.json ...
	I0731 15:15:52.892884    5564 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/custom-flannel-531000/config.json: {Name:mk3ac47fd93777ef64e3c4e3623ebf9824f64954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:15:52.893127    5564 start.go:360] acquireMachinesLock for custom-flannel-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:15:52.893163    5564 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "custom-flannel-531000"
	I0731 15:15:52.893176    5564 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:15:52.893208    5564 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:15:52.899887    5564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:15:52.915406    5564 start.go:159] libmachine.API.Create for "custom-flannel-531000" (driver="qemu2")
	I0731 15:15:52.915431    5564 client.go:168] LocalClient.Create starting
	I0731 15:15:52.915488    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:15:52.915519    5564 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:52.915527    5564 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:52.915569    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:15:52.915590    5564 main.go:141] libmachine: Decoding PEM data...
	I0731 15:15:52.915596    5564 main.go:141] libmachine: Parsing certificate...
	I0731 15:15:52.916066    5564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:15:53.065470    5564 main.go:141] libmachine: Creating SSH key...
	I0731 15:15:53.101792    5564 main.go:141] libmachine: Creating Disk image...
	I0731 15:15:53.101797    5564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:15:53.101996    5564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2
	I0731 15:15:53.111129    5564 main.go:141] libmachine: STDOUT: 
	I0731 15:15:53.111147    5564 main.go:141] libmachine: STDERR: 
	I0731 15:15:53.111191    5564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2 +20000M
	I0731 15:15:53.119287    5564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:15:53.119303    5564 main.go:141] libmachine: STDERR: 
	I0731 15:15:53.119330    5564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2
	I0731 15:15:53.119335    5564 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:15:53.119349    5564 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:15:53.119371    5564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:24:b7:33:05:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2
	I0731 15:15:53.121079    5564 main.go:141] libmachine: STDOUT: 
	I0731 15:15:53.121103    5564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:15:53.121121    5564 client.go:171] duration metric: took 205.689958ms to LocalClient.Create
	I0731 15:15:55.123360    5564 start.go:128] duration metric: took 2.2301555s to createHost
	I0731 15:15:55.123440    5564 start.go:83] releasing machines lock for "custom-flannel-531000", held for 2.230302125s
	W0731 15:15:55.123536    5564 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:55.128913    5564 out.go:177] * Deleting "custom-flannel-531000" in qemu2 ...
	W0731 15:15:55.161338    5564 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:15:55.161360    5564 start.go:729] Will try again in 5 seconds ...
	I0731 15:16:00.163492    5564 start.go:360] acquireMachinesLock for custom-flannel-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:00.163855    5564 start.go:364] duration metric: took 290.292µs to acquireMachinesLock for "custom-flannel-531000"
	I0731 15:16:00.163947    5564 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:00.164155    5564 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:00.172622    5564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:00.220207    5564 start.go:159] libmachine.API.Create for "custom-flannel-531000" (driver="qemu2")
	I0731 15:16:00.220264    5564 client.go:168] LocalClient.Create starting
	I0731 15:16:00.220410    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:00.220482    5564 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:00.220500    5564 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:00.220579    5564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:00.220624    5564 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:00.220639    5564 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:00.221227    5564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:00.379988    5564 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:00.614006    5564 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:00.614016    5564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:00.614265    5564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2
	I0731 15:16:00.623923    5564 main.go:141] libmachine: STDOUT: 
	I0731 15:16:00.623943    5564 main.go:141] libmachine: STDERR: 
	I0731 15:16:00.624004    5564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2 +20000M
	I0731 15:16:00.632333    5564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:00.632346    5564 main.go:141] libmachine: STDERR: 
	I0731 15:16:00.632368    5564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2
	I0731 15:16:00.632372    5564 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:00.632394    5564 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:00.632416    5564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:73:f4:a2:99:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/custom-flannel-531000/disk.qcow2
	I0731 15:16:00.634135    5564 main.go:141] libmachine: STDOUT: 
	I0731 15:16:00.634150    5564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:00.634161    5564 client.go:171] duration metric: took 413.898333ms to LocalClient.Create
	I0731 15:16:02.636247    5564 start.go:128] duration metric: took 2.472113542s to createHost
	I0731 15:16:02.636280    5564 start.go:83] releasing machines lock for "custom-flannel-531000", held for 2.472448417s
	W0731 15:16:02.636464    5564 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:02.653916    5564 out.go:177] 
	W0731 15:16:02.657017    5564 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:16:02.657034    5564 out.go:239] * 
	* 
	W0731 15:16:02.658575    5564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:16:02.668850    5564 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.91s)
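
The stderr capture also shows minikube's recovery behavior: the first createHost attempt fails, the partial VM is deleted, and after a fixed five-second pause exactly one retry is made before the run exits with GUEST_PROVISION. A minimal sketch of that control flow, using hypothetical function names rather than minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// startHostWithRetry mirrors the flow in the log: one create attempt,
	// best-effort cleanup of the partial VM, a fixed 5s pause, then exactly
	// one retry.
	func startHostWithRetry(create, deleteHost func() error) error {
		if err := create(); err != nil {
			log.Printf("StartHost failed, but will try again: %v", err)
			_ = deleteHost() // corresponds to the "* Deleting ... in qemu2 ..." step
			time.Sleep(5 * time.Second)
			return create()
		}
		return nil
	}

	func main() {
		create := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		if err := startHostWithRetry(create, func() error { return nil }); err != nil {
			fmt.Println("Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the daemon never recovers within those five seconds, the retry fails identically, which is why each test in this group takes roughly ten seconds: two createHost attempts of about 2.3s each plus the 5s pause.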

TestNetworkPlugins/group/false/Start (9.83s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.825988041s)

-- stdout --
	* [false-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-531000" primary control-plane node in "false-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:16:05.063968    5681 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:16:05.064084    5681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:05.064088    5681 out.go:304] Setting ErrFile to fd 2...
	I0731 15:16:05.064090    5681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:05.064215    5681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:16:05.065242    5681 out.go:298] Setting JSON to false
	I0731 15:16:05.081853    5681 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4529,"bootTime":1722459636,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:16:05.081916    5681 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:16:05.086786    5681 out.go:177] * [false-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:16:05.094775    5681 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:16:05.094865    5681 notify.go:220] Checking for updates...
	I0731 15:16:05.102731    5681 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:16:05.105733    5681 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:16:05.108750    5681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:16:05.111767    5681 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:16:05.113207    5681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:16:05.116030    5681 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:16:05.116104    5681 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:16:05.116156    5681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:16:05.120772    5681 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:16:05.125721    5681 start.go:297] selected driver: qemu2
	I0731 15:16:05.125726    5681 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:16:05.125732    5681 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:16:05.127840    5681 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:16:05.130723    5681 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:16:05.133894    5681 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:16:05.133917    5681 cni.go:84] Creating CNI manager for "false"
	I0731 15:16:05.133954    5681 start.go:340] cluster config:
	{Name:false-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:16:05.137353    5681 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:16:05.144713    5681 out.go:177] * Starting "false-531000" primary control-plane node in "false-531000" cluster
	I0731 15:16:05.148719    5681 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:16:05.148733    5681 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:16:05.148745    5681 cache.go:56] Caching tarball of preloaded images
	I0731 15:16:05.148802    5681 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:16:05.148809    5681 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:16:05.148884    5681 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/false-531000/config.json ...
	I0731 15:16:05.148898    5681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/false-531000/config.json: {Name:mkc2a1caac1381e9167ede54de66e512f606250a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:16:05.149220    5681 start.go:360] acquireMachinesLock for false-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:05.149249    5681 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "false-531000"
	I0731 15:16:05.149260    5681 start.go:93] Provisioning new machine with config: &{Name:false-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:05.149282    5681 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:05.152762    5681 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:05.167698    5681 start.go:159] libmachine.API.Create for "false-531000" (driver="qemu2")
	I0731 15:16:05.167724    5681 client.go:168] LocalClient.Create starting
	I0731 15:16:05.167787    5681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:05.167817    5681 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:05.167829    5681 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:05.167871    5681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:05.167898    5681 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:05.167908    5681 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:05.168347    5681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:05.317909    5681 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:05.448297    5681 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:05.448304    5681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:05.448543    5681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2
	I0731 15:16:05.458021    5681 main.go:141] libmachine: STDOUT: 
	I0731 15:16:05.458039    5681 main.go:141] libmachine: STDERR: 
	I0731 15:16:05.458084    5681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2 +20000M
	I0731 15:16:05.465964    5681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:05.465986    5681 main.go:141] libmachine: STDERR: 
	I0731 15:16:05.466011    5681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2
	I0731 15:16:05.466015    5681 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:05.466024    5681 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:05.466055    5681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:ef:75:24:0c:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2
	I0731 15:16:05.467699    5681 main.go:141] libmachine: STDOUT: 
	I0731 15:16:05.467716    5681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:05.467731    5681 client.go:171] duration metric: took 300.007584ms to LocalClient.Create
	I0731 15:16:07.469921    5681 start.go:128] duration metric: took 2.320647417s to createHost
	I0731 15:16:07.470002    5681 start.go:83] releasing machines lock for "false-531000", held for 2.320780875s
	W0731 15:16:07.470159    5681 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:07.484423    5681 out.go:177] * Deleting "false-531000" in qemu2 ...
	W0731 15:16:07.513859    5681 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:07.513915    5681 start.go:729] Will try again in 5 seconds ...
	I0731 15:16:12.516008    5681 start.go:360] acquireMachinesLock for false-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:12.516748    5681 start.go:364] duration metric: took 617.75µs to acquireMachinesLock for "false-531000"
	I0731 15:16:12.516939    5681 start.go:93] Provisioning new machine with config: &{Name:false-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:12.517267    5681 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:12.525964    5681 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:12.574064    5681 start.go:159] libmachine.API.Create for "false-531000" (driver="qemu2")
	I0731 15:16:12.574124    5681 client.go:168] LocalClient.Create starting
	I0731 15:16:12.574262    5681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:12.574332    5681 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:12.574352    5681 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:12.574412    5681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:12.574476    5681 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:12.574490    5681 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:12.575010    5681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:12.733260    5681 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:12.802623    5681 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:12.802631    5681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:12.802823    5681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2
	I0731 15:16:12.812624    5681 main.go:141] libmachine: STDOUT: 
	I0731 15:16:12.812646    5681 main.go:141] libmachine: STDERR: 
	I0731 15:16:12.812704    5681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2 +20000M
	I0731 15:16:12.820902    5681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:12.820919    5681 main.go:141] libmachine: STDERR: 
	I0731 15:16:12.820931    5681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2
	I0731 15:16:12.820935    5681 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:12.820945    5681 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:12.820970    5681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:cf:a2:c2:b9:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/false-531000/disk.qcow2
	I0731 15:16:12.822722    5681 main.go:141] libmachine: STDOUT: 
	I0731 15:16:12.822739    5681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:12.822751    5681 client.go:171] duration metric: took 248.626167ms to LocalClient.Create
	I0731 15:16:14.824904    5681 start.go:128] duration metric: took 2.307616208s to createHost
	I0731 15:16:14.824968    5681 start.go:83] releasing machines lock for "false-531000", held for 2.308213041s
	W0731 15:16:14.825349    5681 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:14.834725    5681 out.go:177] 
	W0731 15:16:14.838849    5681 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:16:14.838876    5681 out.go:239] * 
	* 
	W0731 15:16:14.840420    5681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:16:14.849748    5681 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.83s)
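
Since the breakage is environmental rather than test-specific, every qemu2 start in this group fails identically. One way to make such an outage read as SKIP instead of a wall of FAILs would be a pre-flight check in the tests; a sketch of a hypothetical helper (not present in the actual suite), placed in a _test.go file and assuming the same socket path:

	package nettest

	import (
		"net"
		"testing"
		"time"
	)

	// requireSocketVMnet skips the calling test when nothing is accepting
	// connections on the socket_vmnet unix socket.
	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			t.Skipf("socket_vmnet unavailable, skipping qemu2 test: %v", err)
		}
		conn.Close()
	}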

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.806737875s)

-- stdout --
	* [kindnet-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-531000" primary control-plane node in "kindnet-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:16:17.040203    5795 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:16:17.040333    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:17.040336    5795 out.go:304] Setting ErrFile to fd 2...
	I0731 15:16:17.040339    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:17.040484    5795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:16:17.041597    5795 out.go:298] Setting JSON to false
	I0731 15:16:17.058236    5795 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4541,"bootTime":1722459636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:16:17.058325    5795 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:16:17.065144    5795 out.go:177] * [kindnet-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:16:17.073040    5795 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:16:17.073093    5795 notify.go:220] Checking for updates...
	I0731 15:16:17.078426    5795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:16:17.081065    5795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:16:17.084076    5795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:16:17.087107    5795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:16:17.090010    5795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:16:17.093425    5795 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:16:17.093491    5795 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:16:17.093543    5795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:16:17.098075    5795 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:16:17.105046    5795 start.go:297] selected driver: qemu2
	I0731 15:16:17.105051    5795 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:16:17.105056    5795 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:16:17.107434    5795 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:16:17.111054    5795 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:16:17.114107    5795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:16:17.114161    5795 cni.go:84] Creating CNI manager for "kindnet"
	I0731 15:16:17.114166    5795 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 15:16:17.114191    5795 start.go:340] cluster config:
	{Name:kindnet-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:16:17.117966    5795 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:16:17.124898    5795 out.go:177] * Starting "kindnet-531000" primary control-plane node in "kindnet-531000" cluster
	I0731 15:16:17.129075    5795 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:16:17.129090    5795 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:16:17.129102    5795 cache.go:56] Caching tarball of preloaded images
	I0731 15:16:17.129167    5795 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:16:17.129174    5795 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:16:17.129233    5795 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kindnet-531000/config.json ...
	I0731 15:16:17.129244    5795 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kindnet-531000/config.json: {Name:mke82a2c7f2228599ed779fe955c0fbd875a3837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:16:17.129770    5795 start.go:360] acquireMachinesLock for kindnet-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:17.129810    5795 start.go:364] duration metric: took 34.166µs to acquireMachinesLock for "kindnet-531000"
	I0731 15:16:17.129823    5795 start.go:93] Provisioning new machine with config: &{Name:kindnet-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:17.129855    5795 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:17.131416    5795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:17.147853    5795 start.go:159] libmachine.API.Create for "kindnet-531000" (driver="qemu2")
	I0731 15:16:17.147880    5795 client.go:168] LocalClient.Create starting
	I0731 15:16:17.147955    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:17.147986    5795 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:17.147995    5795 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:17.148034    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:17.148056    5795 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:17.148061    5795 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:17.148400    5795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:17.300597    5795 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:17.391390    5795 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:17.391401    5795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:17.391573    5795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2
	I0731 15:16:17.401467    5795 main.go:141] libmachine: STDOUT: 
	I0731 15:16:17.401492    5795 main.go:141] libmachine: STDERR: 
	I0731 15:16:17.401562    5795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2 +20000M
	I0731 15:16:17.410604    5795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:17.410623    5795 main.go:141] libmachine: STDERR: 
	I0731 15:16:17.410655    5795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2
	I0731 15:16:17.410661    5795 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:17.410674    5795 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:17.410703    5795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:95:68:00:84:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2
	I0731 15:16:17.412800    5795 main.go:141] libmachine: STDOUT: 
	I0731 15:16:17.412818    5795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:17.412838    5795 client.go:171] duration metric: took 264.958583ms to LocalClient.Create
	I0731 15:16:19.414897    5795 start.go:128] duration metric: took 2.285066125s to createHost
	I0731 15:16:19.414922    5795 start.go:83] releasing machines lock for "kindnet-531000", held for 2.285143709s
	W0731 15:16:19.414981    5795 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:19.419937    5795 out.go:177] * Deleting "kindnet-531000" in qemu2 ...
	W0731 15:16:19.432307    5795 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:19.432315    5795 start.go:729] Will try again in 5 seconds ...
	I0731 15:16:24.434449    5795 start.go:360] acquireMachinesLock for kindnet-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:24.434905    5795 start.go:364] duration metric: took 349.167µs to acquireMachinesLock for "kindnet-531000"
	I0731 15:16:24.435014    5795 start.go:93] Provisioning new machine with config: &{Name:kindnet-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:24.435227    5795 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:24.444656    5795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:24.490295    5795 start.go:159] libmachine.API.Create for "kindnet-531000" (driver="qemu2")
	I0731 15:16:24.490344    5795 client.go:168] LocalClient.Create starting
	I0731 15:16:24.490458    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:24.490512    5795 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:24.490530    5795 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:24.490589    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:24.490640    5795 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:24.490649    5795 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:24.491114    5795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:24.650990    5795 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:24.766705    5795 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:24.766714    5795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:24.766937    5795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2
	I0731 15:16:24.776415    5795 main.go:141] libmachine: STDOUT: 
	I0731 15:16:24.776440    5795 main.go:141] libmachine: STDERR: 
	I0731 15:16:24.776494    5795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2 +20000M
	I0731 15:16:24.785025    5795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:24.785041    5795 main.go:141] libmachine: STDERR: 
	I0731 15:16:24.785061    5795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2
	I0731 15:16:24.785065    5795 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:24.785076    5795 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:24.785105    5795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:eb:91:9e:c4:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kindnet-531000/disk.qcow2
	I0731 15:16:24.786915    5795 main.go:141] libmachine: STDOUT: 
	I0731 15:16:24.786931    5795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:24.786944    5795 client.go:171] duration metric: took 296.599833ms to LocalClient.Create
	I0731 15:16:26.788992    5795 start.go:128] duration metric: took 2.353793958s to createHost
	I0731 15:16:26.789011    5795 start.go:83] releasing machines lock for "kindnet-531000", held for 2.354080875s
	W0731 15:16:26.789088    5795 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:26.794366    5795 out.go:177] 
	W0731 15:16:26.798337    5795 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:16:26.798351    5795 out.go:239] * 
	* 
	W0731 15:16:26.798865    5795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:16:26.808306    5795 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
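
Every Start failure in this group has the same proximate cause, visible in the stderr above: /opt/socket_vmnet/bin/socket_vmnet_client cannot dial the Unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and minikube exits with GUEST_PROVISION, which the harness records as exit status 80. On a Unix socket, "connection refused" means the socket file exists but nothing is accepting connections on it, i.e. the socket_vmnet daemon is not running on the agent. Below is a minimal Go sketch of that connection attempt (a hypothetical diagnostic helper, not part of minikube or net_test.go):

// probe.go: hypothetical diagnostic, not part of minikube or its tests.
// Dials the same Unix socket socket_vmnet_client uses. "connection refused"
// means the socket file exists but no daemon is listening behind it; if the
// daemon never created the file, the error is "no such file or directory".
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing logs

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}

Since every network-plugin profile in this run fails within roughly ten seconds with the identical message, the daemon was most likely down for the whole run on MacOS-M1-Agent-2 rather than flaking per test.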

TestNetworkPlugins/group/flannel/Start (9.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.688488584s)

-- stdout --
	* [flannel-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-531000" primary control-plane node in "flannel-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:16:29.088952    5913 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:16:29.089086    5913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:29.089089    5913 out.go:304] Setting ErrFile to fd 2...
	I0731 15:16:29.089091    5913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:29.089226    5913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:16:29.090266    5913 out.go:298] Setting JSON to false
	I0731 15:16:29.106734    5913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4553,"bootTime":1722459636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:16:29.106801    5913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:16:29.113412    5913 out.go:177] * [flannel-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:16:29.121362    5913 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:16:29.121407    5913 notify.go:220] Checking for updates...
	I0731 15:16:29.128259    5913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:16:29.131285    5913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:16:29.134326    5913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:16:29.137283    5913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:16:29.140335    5913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:16:29.143624    5913 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:16:29.143711    5913 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:16:29.143762    5913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:16:29.147288    5913 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:16:29.154324    5913 start.go:297] selected driver: qemu2
	I0731 15:16:29.154331    5913 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:16:29.154338    5913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:16:29.156540    5913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:16:29.159226    5913 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:16:29.162329    5913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:16:29.162347    5913 cni.go:84] Creating CNI manager for "flannel"
	I0731 15:16:29.162354    5913 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0731 15:16:29.162385    5913 start.go:340] cluster config:
	{Name:flannel-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:16:29.165849    5913 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:16:29.171311    5913 out.go:177] * Starting "flannel-531000" primary control-plane node in "flannel-531000" cluster
	I0731 15:16:29.175310    5913 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:16:29.175325    5913 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:16:29.175338    5913 cache.go:56] Caching tarball of preloaded images
	I0731 15:16:29.175415    5913 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:16:29.175422    5913 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:16:29.175491    5913 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/flannel-531000/config.json ...
	I0731 15:16:29.175502    5913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/flannel-531000/config.json: {Name:mkde1414e5b6f866fb77fbe58780ce23079fe0e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:16:29.175967    5913 start.go:360] acquireMachinesLock for flannel-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:29.175998    5913 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "flannel-531000"
	I0731 15:16:29.176009    5913 start.go:93] Provisioning new machine with config: &{Name:flannel-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:29.176035    5913 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:29.179235    5913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:29.194901    5913 start.go:159] libmachine.API.Create for "flannel-531000" (driver="qemu2")
	I0731 15:16:29.194929    5913 client.go:168] LocalClient.Create starting
	I0731 15:16:29.194986    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:29.195017    5913 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:29.195025    5913 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:29.195063    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:29.195088    5913 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:29.195101    5913 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:29.195493    5913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:29.346476    5913 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:29.380225    5913 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:29.380230    5913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:29.380436    5913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2
	I0731 15:16:29.389625    5913 main.go:141] libmachine: STDOUT: 
	I0731 15:16:29.389644    5913 main.go:141] libmachine: STDERR: 
	I0731 15:16:29.389700    5913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2 +20000M
	I0731 15:16:29.397683    5913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:29.397698    5913 main.go:141] libmachine: STDERR: 
	I0731 15:16:29.397715    5913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2
	I0731 15:16:29.397725    5913 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:29.397736    5913 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:29.397765    5913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:b0:cb:74:01:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2
	I0731 15:16:29.399443    5913 main.go:141] libmachine: STDOUT: 
	I0731 15:16:29.399458    5913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:29.399476    5913 client.go:171] duration metric: took 204.545583ms to LocalClient.Create
	I0731 15:16:31.401631    5913 start.go:128] duration metric: took 2.225612875s to createHost
	I0731 15:16:31.401662    5913 start.go:83] releasing machines lock for "flannel-531000", held for 2.225696083s
	W0731 15:16:31.401683    5913 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:31.410283    5913 out.go:177] * Deleting "flannel-531000" in qemu2 ...
	W0731 15:16:31.429049    5913 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:31.429059    5913 start.go:729] Will try again in 5 seconds ...
	I0731 15:16:36.429850    5913 start.go:360] acquireMachinesLock for flannel-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:36.430398    5913 start.go:364] duration metric: took 432.833µs to acquireMachinesLock for "flannel-531000"
	I0731 15:16:36.430527    5913 start.go:93] Provisioning new machine with config: &{Name:flannel-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:36.430805    5913 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:36.439496    5913 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:36.483015    5913 start.go:159] libmachine.API.Create for "flannel-531000" (driver="qemu2")
	I0731 15:16:36.483057    5913 client.go:168] LocalClient.Create starting
	I0731 15:16:36.483224    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:36.483309    5913 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:36.483332    5913 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:36.483401    5913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:36.483453    5913 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:36.483468    5913 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:36.483983    5913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:36.641585    5913 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:36.697399    5913 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:36.697406    5913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:36.697607    5913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2
	I0731 15:16:36.706939    5913 main.go:141] libmachine: STDOUT: 
	I0731 15:16:36.706969    5913 main.go:141] libmachine: STDERR: 
	I0731 15:16:36.707026    5913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2 +20000M
	I0731 15:16:36.715004    5913 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:36.715019    5913 main.go:141] libmachine: STDERR: 
	I0731 15:16:36.715035    5913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2
	I0731 15:16:36.715038    5913 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:36.715048    5913 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:36.715084    5913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:17:4a:50:77:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/flannel-531000/disk.qcow2
	I0731 15:16:36.716724    5913 main.go:141] libmachine: STDOUT: 
	I0731 15:16:36.716739    5913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:36.716751    5913 client.go:171] duration metric: took 233.690709ms to LocalClient.Create
	I0731 15:16:38.718477    5913 start.go:128] duration metric: took 2.287685458s to createHost
	I0731 15:16:38.718534    5913 start.go:83] releasing machines lock for "flannel-531000", held for 2.288127s
	W0731 15:16:38.718725    5913 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:38.727065    5913 out.go:177] 
	W0731 15:16:38.731186    5913 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:16:38.731207    5913 out.go:239] * 
	* 
	W0731 15:16:38.732874    5913 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:16:38.740128    5913 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.69s)
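
The stderr above also shows the recovery path these tests exercise before giving up: the first create fails, the half-created "flannel-531000" profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries the create once, and only then exits with GUEST_PROVISION. A compressed Go sketch of that control flow as it appears in these logs (illustrative only, not minikube's actual start.go):

// retry.go: illustrative sketch of the create/delete/retry shape seen in
// the logs above; not minikube's real implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver's host creation; in this run
// it always fails the same way.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "flannel-531000"
	if err := createHost(profile); err != nil {
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			// The harness then sees this as exit status 80.
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}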

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.914017875s)

-- stdout --
	* [enable-default-cni-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-531000" primary control-plane node in "enable-default-cni-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
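
Note that each attempt does get as far as building the guest disk: the stderr capture below (like the earlier ones) shows a successful qemu-img convert from the raw boot2docker image to qcow2, followed by qemu-img resize +20000M; only the subsequent socket_vmnet_client/QEMU launch fails. A small Go sketch of those two disk steps via os/exec (file names are placeholders for the real .minikube machine paths; assumes qemu-img is on PATH):

// diskimage.go: sketch of the two qemu-img invocations the logs show;
// paths are placeholders, not the real .minikube/machines/<profile> files.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\n%s", name, args, out)
	return err
}

func main() {
	raw := "disk.qcow2.raw" // placeholder raw base image
	img := "disk.qcow2"     // placeholder qcow2 target

	// Convert the raw base image to qcow2, then grow it by the requested
	// disk size ("+20000M" matches Disk=20000MB in the cluster config).
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img); err != nil {
		fmt.Fprintln(os.Stderr, "convert failed:", err)
		os.Exit(1)
	}
	if err := run("qemu-img", "resize", img, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize failed:", err)
		os.Exit(1)
	}
}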
** stderr ** 
	I0731 15:16:41.087111    6033 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:16:41.087247    6033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:41.087251    6033 out.go:304] Setting ErrFile to fd 2...
	I0731 15:16:41.087253    6033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:41.087380    6033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:16:41.088483    6033 out.go:298] Setting JSON to false
	I0731 15:16:41.104716    6033 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4565,"bootTime":1722459636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:16:41.104789    6033 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:16:41.111226    6033 out.go:177] * [enable-default-cni-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:16:41.119034    6033 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:16:41.119096    6033 notify.go:220] Checking for updates...
	I0731 15:16:41.125933    6033 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:16:41.128995    6033 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:16:41.132025    6033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:16:41.134997    6033 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:16:41.138026    6033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:16:41.141270    6033 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:16:41.141343    6033 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:16:41.141400    6033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:16:41.144926    6033 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:16:41.151962    6033 start.go:297] selected driver: qemu2
	I0731 15:16:41.151967    6033 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:16:41.151975    6033 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:16:41.154156    6033 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:16:41.156962    6033 out.go:177] * Automatically selected the socket_vmnet network
	E0731 15:16:41.160019    6033 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0731 15:16:41.160032    6033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:16:41.160058    6033 cni.go:84] Creating CNI manager for "bridge"
	I0731 15:16:41.160063    6033 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:16:41.160098    6033 start.go:340] cluster config:
	{Name:enable-default-cni-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:16:41.163499    6033 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:16:41.171011    6033 out.go:177] * Starting "enable-default-cni-531000" primary control-plane node in "enable-default-cni-531000" cluster
	I0731 15:16:41.174958    6033 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:16:41.174971    6033 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:16:41.174983    6033 cache.go:56] Caching tarball of preloaded images
	I0731 15:16:41.175033    6033 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:16:41.175038    6033 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:16:41.175085    6033 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/enable-default-cni-531000/config.json ...
	I0731 15:16:41.175095    6033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/enable-default-cni-531000/config.json: {Name:mkc71bd2091d37a52e23a10c853b63f77b398dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:16:41.175411    6033 start.go:360] acquireMachinesLock for enable-default-cni-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:41.175442    6033 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "enable-default-cni-531000"
	I0731 15:16:41.175453    6033 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:41.175484    6033 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:41.182943    6033 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:41.198310    6033 start.go:159] libmachine.API.Create for "enable-default-cni-531000" (driver="qemu2")
	I0731 15:16:41.198339    6033 client.go:168] LocalClient.Create starting
	I0731 15:16:41.198395    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:41.198427    6033 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:41.198447    6033 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:41.198487    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:41.198509    6033 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:41.198515    6033 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:41.198957    6033 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:41.348917    6033 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:41.500324    6033 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:41.500333    6033 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:41.500532    6033 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2
	I0731 15:16:41.510265    6033 main.go:141] libmachine: STDOUT: 
	I0731 15:16:41.510285    6033 main.go:141] libmachine: STDERR: 
	I0731 15:16:41.510337    6033 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2 +20000M
	I0731 15:16:41.518318    6033 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:41.518336    6033 main.go:141] libmachine: STDERR: 
	I0731 15:16:41.518354    6033 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2
	I0731 15:16:41.518359    6033 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:41.518371    6033 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:41.518398    6033 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:63:cf:f4:eb:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2
	I0731 15:16:41.520059    6033 main.go:141] libmachine: STDOUT: 
	I0731 15:16:41.520076    6033 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:41.520093    6033 client.go:171] duration metric: took 321.755042ms to LocalClient.Create
	I0731 15:16:43.522286    6033 start.go:128] duration metric: took 2.346813625s to createHost
	I0731 15:16:43.522362    6033 start.go:83] releasing machines lock for "enable-default-cni-531000", held for 2.346951083s
	W0731 15:16:43.522418    6033 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:43.528193    6033 out.go:177] * Deleting "enable-default-cni-531000" in qemu2 ...
	W0731 15:16:43.554034    6033 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:43.554059    6033 start.go:729] Will try again in 5 seconds ...
	I0731 15:16:48.556198    6033 start.go:360] acquireMachinesLock for enable-default-cni-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:48.556478    6033 start.go:364] duration metric: took 214.5µs to acquireMachinesLock for "enable-default-cni-531000"
	I0731 15:16:48.556554    6033 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:48.556684    6033 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:48.566067    6033 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:48.602800    6033 start.go:159] libmachine.API.Create for "enable-default-cni-531000" (driver="qemu2")
	I0731 15:16:48.602852    6033 client.go:168] LocalClient.Create starting
	I0731 15:16:48.602961    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:48.603024    6033 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:48.603038    6033 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:48.603089    6033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:48.603134    6033 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:48.603145    6033 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:48.603614    6033 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:48.758366    6033 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:48.917633    6033 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:48.917642    6033 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:48.917841    6033 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2
	I0731 15:16:48.927487    6033 main.go:141] libmachine: STDOUT: 
	I0731 15:16:48.927503    6033 main.go:141] libmachine: STDERR: 
	I0731 15:16:48.927553    6033 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2 +20000M
	I0731 15:16:48.935490    6033 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:48.935514    6033 main.go:141] libmachine: STDERR: 
	I0731 15:16:48.935528    6033 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2
	I0731 15:16:48.935540    6033 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:48.935551    6033 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:48.935579    6033 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:56:4c:d4:2d:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/enable-default-cni-531000/disk.qcow2
	I0731 15:16:48.937291    6033 main.go:141] libmachine: STDOUT: 
	I0731 15:16:48.937306    6033 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:48.937317    6033 client.go:171] duration metric: took 334.465458ms to LocalClient.Create
	I0731 15:16:50.939348    6033 start.go:128] duration metric: took 2.382687167s to createHost
	I0731 15:16:50.939370    6033 start.go:83] releasing machines lock for "enable-default-cni-531000", held for 2.382918375s
	W0731 15:16:50.939461    6033 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:50.947907    6033 out.go:177] 
	W0731 15:16:50.951997    6033 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:16:50.952004    6033 out.go:239] * 
	* 
	W0731 15:16:50.952676    6033 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:16:50.963922    6033 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
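Note on this failure group: every start attempt above dies at the same step. socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network descriptor and minikube gives up with GUEST_PROVISION (exit status 80). The following is a minimal, hypothetical Go probe (not part of the test suite; the socket path is copied from SocketVMnetPath in the config dumps above) that reproduces the failing step in isolation:

package main

// socketprobe: dial the unix socket that socket_vmnet_client needs.
// On this agent it would exit non-zero, matching the "Connection refused"
// signature shared by the NetworkPlugins failures in this report.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Same failure mode the tests report:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}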

TestNetworkPlugins/group/bridge/Start (9.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.987571791s)

-- stdout --
	* [bridge-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-531000" primary control-plane node in "bridge-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:16:53.113245    6149 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:16:53.113368    6149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:53.113371    6149 out.go:304] Setting ErrFile to fd 2...
	I0731 15:16:53.113373    6149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:16:53.113496    6149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:16:53.114543    6149 out.go:298] Setting JSON to false
	I0731 15:16:53.130534    6149 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4577,"bootTime":1722459636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:16:53.130597    6149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:16:53.134996    6149 out.go:177] * [bridge-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:16:53.142893    6149 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:16:53.142932    6149 notify.go:220] Checking for updates...
	I0731 15:16:53.149924    6149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:16:53.152924    6149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:16:53.155950    6149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:16:53.158868    6149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:16:53.161927    6149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:16:53.165212    6149 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:16:53.165281    6149 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:16:53.165332    6149 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:16:53.169806    6149 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:16:53.176970    6149 start.go:297] selected driver: qemu2
	I0731 15:16:53.176978    6149 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:16:53.176986    6149 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:16:53.179240    6149 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:16:53.182884    6149 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:16:53.185927    6149 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:16:53.185939    6149 cni.go:84] Creating CNI manager for "bridge"
	I0731 15:16:53.185946    6149 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:16:53.185969    6149 start.go:340] cluster config:
	{Name:bridge-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:16:53.189539    6149 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:16:53.196889    6149 out.go:177] * Starting "bridge-531000" primary control-plane node in "bridge-531000" cluster
	I0731 15:16:53.200904    6149 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:16:53.200921    6149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:16:53.200937    6149 cache.go:56] Caching tarball of preloaded images
	I0731 15:16:53.200997    6149 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:16:53.201006    6149 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:16:53.201068    6149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/bridge-531000/config.json ...
	I0731 15:16:53.201079    6149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/bridge-531000/config.json: {Name:mk931fac2f66e28b6baa60a9b06779a3818d15c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:16:53.201293    6149 start.go:360] acquireMachinesLock for bridge-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:16:53.201324    6149 start.go:364] duration metric: took 26.416µs to acquireMachinesLock for "bridge-531000"
	I0731 15:16:53.201336    6149 start.go:93] Provisioning new machine with config: &{Name:bridge-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:16:53.201364    6149 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:16:53.208887    6149 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:16:53.225534    6149 start.go:159] libmachine.API.Create for "bridge-531000" (driver="qemu2")
	I0731 15:16:53.225564    6149 client.go:168] LocalClient.Create starting
	I0731 15:16:53.225622    6149 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:16:53.225651    6149 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:53.225660    6149 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:53.225700    6149 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:16:53.225722    6149 main.go:141] libmachine: Decoding PEM data...
	I0731 15:16:53.225731    6149 main.go:141] libmachine: Parsing certificate...
	I0731 15:16:53.226074    6149 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:16:53.405890    6149 main.go:141] libmachine: Creating SSH key...
	I0731 15:16:53.453851    6149 main.go:141] libmachine: Creating Disk image...
	I0731 15:16:53.453857    6149 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:16:53.454048    6149 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2
	I0731 15:16:53.463161    6149 main.go:141] libmachine: STDOUT: 
	I0731 15:16:53.463179    6149 main.go:141] libmachine: STDERR: 
	I0731 15:16:53.463235    6149 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2 +20000M
	I0731 15:16:53.471135    6149 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:16:53.471156    6149 main.go:141] libmachine: STDERR: 
	I0731 15:16:53.471172    6149 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2
	I0731 15:16:53.471178    6149 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:16:53.471186    6149 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:16:53.471225    6149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f3:9e:98:72:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2
	I0731 15:16:53.472964    6149 main.go:141] libmachine: STDOUT: 
	I0731 15:16:53.472977    6149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:16:53.472996    6149 client.go:171] duration metric: took 247.431209ms to LocalClient.Create
	I0731 15:16:55.475179    6149 start.go:128] duration metric: took 2.2738225s to createHost
	I0731 15:16:55.475308    6149 start.go:83] releasing machines lock for "bridge-531000", held for 2.274010166s
	W0731 15:16:55.475389    6149 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:55.481726    6149 out.go:177] * Deleting "bridge-531000" in qemu2 ...
	W0731 15:16:55.504890    6149 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:16:55.504917    6149 start.go:729] Will try again in 5 seconds ...
	I0731 15:17:00.505833    6149 start.go:360] acquireMachinesLock for bridge-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:00.506389    6149 start.go:364] duration metric: took 448.25µs to acquireMachinesLock for "bridge-531000"
	I0731 15:17:00.506466    6149 start.go:93] Provisioning new machine with config: &{Name:bridge-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:00.506775    6149 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:00.521297    6149 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:17:00.572532    6149 start.go:159] libmachine.API.Create for "bridge-531000" (driver="qemu2")
	I0731 15:17:00.572588    6149 client.go:168] LocalClient.Create starting
	I0731 15:17:00.572715    6149 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:00.572786    6149 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:00.572805    6149 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:00.572865    6149 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:00.572910    6149 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:00.572923    6149 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:00.573437    6149 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:00.733508    6149 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:01.006556    6149 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:01.006570    6149 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:01.006786    6149 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2
	I0731 15:17:01.016357    6149 main.go:141] libmachine: STDOUT: 
	I0731 15:17:01.016392    6149 main.go:141] libmachine: STDERR: 
	I0731 15:17:01.016444    6149 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2 +20000M
	I0731 15:17:01.024408    6149 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:01.024420    6149 main.go:141] libmachine: STDERR: 
	I0731 15:17:01.024437    6149 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2
	I0731 15:17:01.024444    6149 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:01.024453    6149 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:01.024480    6149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:92:52:2e:aa:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/bridge-531000/disk.qcow2
	I0731 15:17:01.026579    6149 main.go:141] libmachine: STDOUT: 
	I0731 15:17:01.026604    6149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:01.026619    6149 client.go:171] duration metric: took 454.031333ms to LocalClient.Create
	I0731 15:17:03.028819    6149 start.go:128] duration metric: took 2.522016792s to createHost
	I0731 15:17:03.028912    6149 start.go:83] releasing machines lock for "bridge-531000", held for 2.522539709s
	W0731 15:17:03.029288    6149 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:03.043024    6149 out.go:177] 
	W0731 15:17:03.047087    6149 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:03.047144    6149 out.go:239] * 
	* 
	W0731 15:17:03.049962    6149 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:03.059942    6149 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.99s)
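For context on the failing command: the QEMU invocations in these logs are wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and then launches QEMU with the connected descriptor as fd 3, which is what the "-netdev socket,id=net0,fd=3" argument refers to. Below is a rough Go sketch of that descriptor-passing pattern (illustrative only, with assumed argument handling; the real client is a separate C program with vmnet-specific logic):

package main

// fdpass: connect to the socket_vmnet unix socket, then start a child
// process with the connected descriptor as fd 3 (ExtraFiles[0] maps to
// fd 3 after stdin/stdout/stderr), mirroring "-netdev socket,id=net0,fd=3".

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// The step that fails throughout this report.
		log.Fatalf("dial socket_vmnet: %v", err)
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the descriptor for the child
	if err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("qemu-system-aarch64", os.Args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.ExtraFiles = []*os.File{f} // the child sees this as fd 3
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}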

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-531000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.887494208s)

-- stdout --
	* [kubenet-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-531000" primary control-plane node in "kubenet-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:17:05.219252    6260 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:05.219390    6260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:05.219393    6260 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:05.219396    6260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:05.219518    6260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:05.220584    6260 out.go:298] Setting JSON to false
	I0731 15:17:05.236807    6260 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4589,"bootTime":1722459636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:05.236885    6260 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:05.243388    6260 out.go:177] * [kubenet-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:05.251236    6260 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:05.251302    6260 notify.go:220] Checking for updates...
	I0731 15:17:05.258395    6260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:05.259714    6260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:05.262395    6260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:05.265388    6260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:05.268481    6260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:05.271698    6260 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:17:05.271768    6260 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:17:05.271817    6260 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:05.276371    6260 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:17:05.283417    6260 start.go:297] selected driver: qemu2
	I0731 15:17:05.283423    6260 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:17:05.283430    6260 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:05.285823    6260 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:17:05.288379    6260 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:17:05.291420    6260 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:05.291451    6260 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0731 15:17:05.291482    6260 start.go:340] cluster config:
	{Name:kubenet-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:05.295189    6260 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:05.302386    6260 out.go:177] * Starting "kubenet-531000" primary control-plane node in "kubenet-531000" cluster
	I0731 15:17:05.312749    6260 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:17:05.312765    6260 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:17:05.312775    6260 cache.go:56] Caching tarball of preloaded images
	I0731 15:17:05.312825    6260 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:17:05.312831    6260 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:17:05.312886    6260 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kubenet-531000/config.json ...
	I0731 15:17:05.312896    6260 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/kubenet-531000/config.json: {Name:mk5a61b91d36e5cd10c28e6e6ea7285e0e64acc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:17:05.313128    6260 start.go:360] acquireMachinesLock for kubenet-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:05.313159    6260 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "kubenet-531000"
	I0731 15:17:05.313170    6260 start.go:93] Provisioning new machine with config: &{Name:kubenet-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:05.313193    6260 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:05.320275    6260 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:17:05.335711    6260 start.go:159] libmachine.API.Create for "kubenet-531000" (driver="qemu2")
	I0731 15:17:05.335747    6260 client.go:168] LocalClient.Create starting
	I0731 15:17:05.335808    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:05.335838    6260 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:05.335846    6260 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:05.335887    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:05.335916    6260 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:05.335923    6260 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:05.336353    6260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:05.485269    6260 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:05.554339    6260 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:05.554344    6260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:05.554531    6260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2
	I0731 15:17:05.563897    6260 main.go:141] libmachine: STDOUT: 
	I0731 15:17:05.563915    6260 main.go:141] libmachine: STDERR: 
	I0731 15:17:05.563965    6260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2 +20000M
	I0731 15:17:05.572546    6260 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:05.572569    6260 main.go:141] libmachine: STDERR: 
	I0731 15:17:05.572582    6260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2
	I0731 15:17:05.572588    6260 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:05.572602    6260 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:05.572646    6260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ec:0d:3c:5d:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2
	I0731 15:17:05.574784    6260 main.go:141] libmachine: STDOUT: 
	I0731 15:17:05.574802    6260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:05.574820    6260 client.go:171] duration metric: took 239.070791ms to LocalClient.Create
	I0731 15:17:07.576970    6260 start.go:128] duration metric: took 2.263786584s to createHost
	I0731 15:17:07.577061    6260 start.go:83] releasing machines lock for "kubenet-531000", held for 2.263930959s
	W0731 15:17:07.577116    6260 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:07.591049    6260 out.go:177] * Deleting "kubenet-531000" in qemu2 ...
	W0731 15:17:07.616277    6260 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:07.616304    6260 start.go:729] Will try again in 5 seconds ...
	I0731 15:17:12.618537    6260 start.go:360] acquireMachinesLock for kubenet-531000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:12.619054    6260 start.go:364] duration metric: took 384.542µs to acquireMachinesLock for "kubenet-531000"
	I0731 15:17:12.619189    6260 start.go:93] Provisioning new machine with config: &{Name:kubenet-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:12.619408    6260 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:12.628034    6260 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 15:17:12.671648    6260 start.go:159] libmachine.API.Create for "kubenet-531000" (driver="qemu2")
	I0731 15:17:12.671703    6260 client.go:168] LocalClient.Create starting
	I0731 15:17:12.671831    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:12.671903    6260 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:12.671922    6260 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:12.671994    6260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:12.672043    6260 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:12.672055    6260 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:12.672556    6260 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:12.831494    6260 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:13.021676    6260 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:13.021685    6260 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:13.021890    6260 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2
	I0731 15:17:13.031404    6260 main.go:141] libmachine: STDOUT: 
	I0731 15:17:13.031432    6260 main.go:141] libmachine: STDERR: 
	I0731 15:17:13.031505    6260 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2 +20000M
	I0731 15:17:13.039440    6260 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:13.039471    6260 main.go:141] libmachine: STDERR: 
	I0731 15:17:13.039482    6260 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2
	I0731 15:17:13.039486    6260 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:13.039495    6260 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:13.039522    6260 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:81:b4:2a:4a:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/kubenet-531000/disk.qcow2
	I0731 15:17:13.041232    6260 main.go:141] libmachine: STDOUT: 
	I0731 15:17:13.041248    6260 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:13.041265    6260 client.go:171] duration metric: took 369.560167ms to LocalClient.Create
	I0731 15:17:15.043327    6260 start.go:128] duration metric: took 2.423940292s to createHost
	I0731 15:17:15.043364    6260 start.go:83] releasing machines lock for "kubenet-531000", held for 2.42432925s
	W0731 15:17:15.043474    6260 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:15.054307    6260 out.go:177] 
	W0731 15:17:15.057305    6260 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:15.057326    6260 out.go:239] * 
	* 
	W0731 15:17:15.058046    6260 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:15.070289    6260 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
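
Every start failure in this run stops at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. On a unix socket, "Connection refused" means nothing is listening at that path, i.e. the socket_vmnet daemon is not running on the CI host. Below is a minimal Go sketch (illustrative only, not part of minikube or this test suite) of the same reachability check, using the socket path shown in the logs:

// preflight.go: dial the socket_vmnet unix socket the way socket_vmnet_client
// would; if no daemon is listening, the dial fails with the same
// "connection refused" seen in the STDERR lines above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If this dial fails on the host, every qemu2 start in the run fails the same way, which matches the uniform ~10-second failures across the network-plugin group above.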

TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-233000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-233000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.879763958s)

-- stdout --
	* [old-k8s-version-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-233000" primary control-plane node in "old-k8s-version-233000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-233000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:17:17.194498    6373 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:17.194643    6373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:17.194647    6373 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:17.194649    6373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:17.194798    6373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:17.195890    6373 out.go:298] Setting JSON to false
	I0731 15:17:17.212160    6373 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4601,"bootTime":1722459636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:17.212243    6373 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:17.218205    6373 out.go:177] * [old-k8s-version-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:17.226311    6373 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:17.226320    6373 notify.go:220] Checking for updates...
	I0731 15:17:17.234150    6373 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:17.237176    6373 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:17.240194    6373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:17.243129    6373 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:17.246201    6373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:17.249488    6373 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:17:17.249569    6373 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:17:17.249621    6373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:17.254162    6373 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:17:17.261179    6373 start.go:297] selected driver: qemu2
	I0731 15:17:17.261183    6373 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:17:17.261188    6373 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:17.263419    6373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:17:17.266218    6373 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:17:17.269282    6373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:17.269315    6373 cni.go:84] Creating CNI manager for ""
	I0731 15:17:17.269328    6373 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 15:17:17.269354    6373 start.go:340] cluster config:
	{Name:old-k8s-version-233000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:17.272890    6373 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:17.280128    6373 out.go:177] * Starting "old-k8s-version-233000" primary control-plane node in "old-k8s-version-233000" cluster
	I0731 15:17:17.284171    6373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 15:17:17.284183    6373 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 15:17:17.284190    6373 cache.go:56] Caching tarball of preloaded images
	I0731 15:17:17.284236    6373 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:17:17.284242    6373 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 15:17:17.284290    6373 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/old-k8s-version-233000/config.json ...
	I0731 15:17:17.284300    6373 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/old-k8s-version-233000/config.json: {Name:mkbca998a23f7cfffdb6d40d2dd960983ea27dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:17:17.284623    6373 start.go:360] acquireMachinesLock for old-k8s-version-233000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:17.284654    6373 start.go:364] duration metric: took 25µs to acquireMachinesLock for "old-k8s-version-233000"
	I0731 15:17:17.284665    6373 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:17.284689    6373 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:17.288196    6373 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:17:17.303080    6373 start.go:159] libmachine.API.Create for "old-k8s-version-233000" (driver="qemu2")
	I0731 15:17:17.303105    6373 client.go:168] LocalClient.Create starting
	I0731 15:17:17.303161    6373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:17.303192    6373 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:17.303201    6373 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:17.303237    6373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:17.303259    6373 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:17.303268    6373 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:17.303701    6373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:17.454412    6373 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:17.650842    6373 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:17.650855    6373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:17.651070    6373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:17.660495    6373 main.go:141] libmachine: STDOUT: 
	I0731 15:17:17.660514    6373 main.go:141] libmachine: STDERR: 
	I0731 15:17:17.660574    6373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2 +20000M
	I0731 15:17:17.668554    6373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:17.668567    6373 main.go:141] libmachine: STDERR: 
	I0731 15:17:17.668580    6373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:17.668584    6373 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:17.668594    6373 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:17.668628    6373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:fe:2a:88:80:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:17.670279    6373 main.go:141] libmachine: STDOUT: 
	I0731 15:17:17.670306    6373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:17.670328    6373 client.go:171] duration metric: took 367.225ms to LocalClient.Create
	I0731 15:17:19.670693    6373 start.go:128] duration metric: took 2.386031708s to createHost
	I0731 15:17:19.670721    6373 start.go:83] releasing machines lock for "old-k8s-version-233000", held for 2.3860975s
	W0731 15:17:19.670759    6373 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:19.674642    6373 out.go:177] * Deleting "old-k8s-version-233000" in qemu2 ...
	W0731 15:17:19.692086    6373 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:19.692100    6373 start.go:729] Will try again in 5 seconds ...
	I0731 15:17:24.694329    6373 start.go:360] acquireMachinesLock for old-k8s-version-233000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:24.694960    6373 start.go:364] duration metric: took 477.708µs to acquireMachinesLock for "old-k8s-version-233000"
	I0731 15:17:24.695113    6373 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:24.695477    6373 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:24.701181    6373 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:17:24.746699    6373 start.go:159] libmachine.API.Create for "old-k8s-version-233000" (driver="qemu2")
	I0731 15:17:24.746750    6373 client.go:168] LocalClient.Create starting
	I0731 15:17:24.746869    6373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:24.746927    6373 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:24.746944    6373 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:24.747012    6373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:24.747052    6373 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:24.747068    6373 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:24.747542    6373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:24.905658    6373 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:24.980551    6373 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:24.980556    6373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:24.980735    6373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:24.990198    6373 main.go:141] libmachine: STDOUT: 
	I0731 15:17:24.990213    6373 main.go:141] libmachine: STDERR: 
	I0731 15:17:24.990274    6373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2 +20000M
	I0731 15:17:24.998173    6373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:24.998187    6373 main.go:141] libmachine: STDERR: 
	I0731 15:17:24.998206    6373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:24.998211    6373 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:24.998227    6373 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:24.998256    6373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:08:4d:6d:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:25.000037    6373 main.go:141] libmachine: STDOUT: 
	I0731 15:17:25.000052    6373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:25.000064    6373 client.go:171] duration metric: took 253.312208ms to LocalClient.Create
	I0731 15:17:27.002260    6373 start.go:128] duration metric: took 2.306783042s to createHost
	I0731 15:17:27.002346    6373 start.go:83] releasing machines lock for "old-k8s-version-233000", held for 2.307395625s
	W0731 15:17:27.002777    6373 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-233000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-233000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:27.016475    6373 out.go:177] 
	W0731 15:17:27.020476    6373 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:27.020495    6373 out.go:239] * 
	* 
	W0731 15:17:27.021876    6373 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:27.033389    6373 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-233000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (60.344666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)
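
The FirstStart log above also shows the recovery path minikube takes before giving up: createHost fails, the half-created profile is deleted ("* Deleting ... in qemu2 ..."), and exactly one retry runs after a fixed 5-second delay (start.go:729) before the command exits with GUEST_PROVISION / exit status 80. A compressed Go sketch of that control flow, with a stand-in for the failing create step (illustrative only, not minikube's implementation):

// retrystart.go: the create/delete/retry shape visible in the FirstStart
// log, compressed into a few lines.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create path, which in this run
// always fails before QEMU starts.
func createHost() error {
	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// the log shows a single retry after a fixed 5-second delay
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}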

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-233000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-233000 create -f testdata/busybox.yaml: exit status 1 (30.183333ms)

** stderr ** 
	error: context "old-k8s-version-233000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-233000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (29.143833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (28.500083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
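
DeployApp, like every later step in this serial group, is a cascade failure rather than an independent bug: FirstStart never produced a cluster, so the kubeconfig has no context named old-k8s-version-233000 and kubectl exits 1 immediately. A sketch of the same context lookup using k8s.io/client-go (a hypothetical diagnostic helper, shown only to make the error mode concrete; the kubeconfig path is the one from the logs):

// checkctx.go: report whether the kubeconfig contains the context the
// test is about to use. Hypothetical helper, not part of the suite.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path taken from the run's environment, shown above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19312-1411/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "old-k8s-version-233000"
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition behind kubectl's exit status 1 above.
		fmt.Fprintf(os.Stderr, "error: context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", name)
}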

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-233000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-233000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-233000 describe deploy/metrics-server -n kube-system: exit status 1 (26.828625ms)

** stderr ** 
	error: context "old-k8s-version-233000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-233000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (28.70275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
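
The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment image to be the custom registry joined to the custom image, fake.domain/registry.k8s.io/echoserver:1.4, built from the --registries and --images flags passed above. A tiny sketch of that composition (the joining rule is inferred from the flags and the expected string, not taken from minikube's source):

// addonimage.go: join a custom registry with a custom image the way the
// expected string in the assertion above is formed. Inferred shape.
package main

import "fmt"

func main() {
	registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
	image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}

The check never gets that far here: the preceding kubectl describe fails on the missing context, so the deployment info is empty.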

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-233000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-233000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.192141458s)

-- stdout --
	* [old-k8s-version-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-233000" primary control-plane node in "old-k8s-version-233000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-233000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-233000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:17:30.318216    6427 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:30.318345    6427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:30.318348    6427 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:30.318350    6427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:30.318473    6427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:30.319593    6427 out.go:298] Setting JSON to false
	I0731 15:17:30.336303    6427 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4614,"bootTime":1722459636,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:30.336370    6427 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:30.339974    6427 out.go:177] * [old-k8s-version-233000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:30.348003    6427 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:30.348116    6427 notify.go:220] Checking for updates...
	I0731 15:17:30.354992    6427 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:30.357995    6427 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:30.360936    6427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:30.363999    6427 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:30.367023    6427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:30.370216    6427 config.go:182] Loaded profile config "old-k8s-version-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 15:17:30.372960    6427 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 15:17:30.376000    6427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:30.379987    6427 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:17:30.387005    6427 start.go:297] selected driver: qemu2
	I0731 15:17:30.387013    6427 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:30.387079    6427 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:30.389289    6427 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:30.389310    6427 cni.go:84] Creating CNI manager for ""
	I0731 15:17:30.389316    6427 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 15:17:30.389342    6427 start.go:340] cluster config:
	{Name:old-k8s-version-233000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-233000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:30.392667    6427 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:30.400021    6427 out.go:177] * Starting "old-k8s-version-233000" primary control-plane node in "old-k8s-version-233000" cluster
	I0731 15:17:30.402954    6427 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 15:17:30.402969    6427 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 15:17:30.402983    6427 cache.go:56] Caching tarball of preloaded images
	I0731 15:17:30.403040    6427 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:17:30.403048    6427 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 15:17:30.403119    6427 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/old-k8s-version-233000/config.json ...
	I0731 15:17:30.403553    6427 start.go:360] acquireMachinesLock for old-k8s-version-233000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:30.403586    6427 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "old-k8s-version-233000"
	I0731 15:17:30.403595    6427 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:17:30.403601    6427 fix.go:54] fixHost starting: 
	I0731 15:17:30.403722    6427 fix.go:112] recreateIfNeeded on old-k8s-version-233000: state=Stopped err=<nil>
	W0731 15:17:30.403730    6427 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:17:30.407988    6427 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-233000" ...
	I0731 15:17:30.414920    6427 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:30.414950    6427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:08:4d:6d:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:30.416865    6427 main.go:141] libmachine: STDOUT: 
	I0731 15:17:30.416884    6427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:30.416910    6427 fix.go:56] duration metric: took 13.308709ms for fixHost
	I0731 15:17:30.416914    6427 start.go:83] releasing machines lock for "old-k8s-version-233000", held for 13.324584ms
	W0731 15:17:30.416929    6427 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:30.416956    6427 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:30.416960    6427 start.go:729] Will try again in 5 seconds ...
	I0731 15:17:35.417985    6427 start.go:360] acquireMachinesLock for old-k8s-version-233000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:35.418472    6427 start.go:364] duration metric: took 338.833µs to acquireMachinesLock for "old-k8s-version-233000"
	I0731 15:17:35.418623    6427 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:17:35.418645    6427 fix.go:54] fixHost starting: 
	I0731 15:17:35.419403    6427 fix.go:112] recreateIfNeeded on old-k8s-version-233000: state=Stopped err=<nil>
	W0731 15:17:35.419432    6427 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:17:35.429043    6427 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-233000" ...
	I0731 15:17:35.433989    6427 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:35.434283    6427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:08:4d:6d:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/old-k8s-version-233000/disk.qcow2
	I0731 15:17:35.444868    6427 main.go:141] libmachine: STDOUT: 
	I0731 15:17:35.444945    6427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:35.445054    6427 fix.go:56] duration metric: took 26.411083ms for fixHost
	I0731 15:17:35.445078    6427 start.go:83] releasing machines lock for "old-k8s-version-233000", held for 26.582792ms
	W0731 15:17:35.445295    6427 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-233000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-233000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:35.453913    6427 out.go:177] 
	W0731 15:17:35.458100    6427 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:35.458148    6427 out.go:239] * 
	* 
	W0731 15:17:35.460366    6427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:35.468914    6427 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-233000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (63.343625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
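Triage note: every failed start in this run bottoms out in the same error, socket_vmnet_client being refused a connection to /var/run/socket_vmnet, which points at the socket_vmnet daemon not running on the build agent rather than at qemu or minikube itself. A minimal Go sketch of a preflight probe that would surface this before any VM is launched (the helper name and flow are assumptions, not minikube code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeSocketVMNet dials the socket_vmnet control socket the same way
    // socket_vmnet_client would, so a dead daemon is caught before qemu is
    // ever started. Hypothetical helper; not part of minikube.
    func probeSocketVMNet(path string) error {
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
        }
        return conn.Close()
    }

    func main() {
        if err := probeSocketVMNet("/var/run/socket_vmnet"); err != nil {
            fmt.Println(err) // expected on this agent: connection refused
        }
    }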

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-233000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (32.241791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
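Triage note: this failure and the kubectl failures below are cascades of the start failure, not independent bugs: the VM never came up, so minikube never wrote the profile's context into the kubeconfig. A short Go sketch with client-go's clientcmd that checks for the context directly (the kubeconfig path and profile name are taken from the log; the rest is an assumption):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Point the default loading rules at the integration kubeconfig,
        // then check whether the profile's context was ever written.
        os.Setenv("KUBECONFIG", "/Users/jenkins/minikube-integration/19312-1411/kubeconfig")
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Contexts["old-k8s-version-233000"]; !ok {
            fmt.Println(`context "old-k8s-version-233000" does not exist`)
        }
    }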

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-233000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.35625ms)

** stderr ** 
	error: context "old-k8s-version-233000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-233000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (28.995333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-233000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (28.96325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
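Triage note: the -want +got diff above has an empty got side because "image list" against a never-provisioned VM returns nothing, so all eight v1.20.0 images read as missing. The underlying check is a plain set difference, sketched here in Go (a simplification of the test's go-cmp comparison, with a shortened want list):

    package main

    import "fmt"

    // missingImages reports every expected image absent from the image-list
    // output; with a stopped VM that output is empty, so the whole want list
    // comes back, matching the diff above.
    func missingImages(want, got []string) []string {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        var missing []string
        for _, img := range want {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }

    func main() {
        want := []string{
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        fmt.Println(missingImages(want, nil)) // got is empty: VM never started
    }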

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-233000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-233000 --alsologtostderr -v=1: exit status 83 (41.325625ms)

-- stdout --
	* The control-plane node old-k8s-version-233000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-233000"

-- /stdout --
** stderr ** 
	I0731 15:17:35.733362    6451 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:35.734259    6451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:35.734268    6451 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:35.734271    6451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:35.734407    6451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:35.734642    6451 out.go:298] Setting JSON to false
	I0731 15:17:35.734649    6451 mustload.go:65] Loading cluster: old-k8s-version-233000
	I0731 15:17:35.734865    6451 config.go:182] Loaded profile config "old-k8s-version-233000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 15:17:35.739606    6451 out.go:177] * The control-plane node old-k8s-version-233000 host is not running: state=Stopped
	I0731 15:17:35.742620    6451 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-233000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-233000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (28.909917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (28.667875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
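Triage note: pause exits 83 here rather than the 80 seen in the start failures, i.e. the advisory "host not running" path rather than a provisioning error. A Go sketch of how a caller can read that exit code with os/exec (only the binary path and profile name are taken from the log; nothing about minikube's exit-code table is assumed beyond the two values shown in this report):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Run pause the way the test does and inspect the raw exit code to
        // tell an advisory exit apart from a hard failure.
        cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-233000")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("pause exited with code", ee.ExitCode()) // 83 in this run
        }
    }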

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.881203333s)

-- stdout --
	* [no-preload-428000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-428000" primary control-plane node in "no-preload-428000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-428000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:17:36.049325    6468 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:36.049486    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:36.049490    6468 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:36.049492    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:36.049632    6468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:36.050715    6468 out.go:298] Setting JSON to false
	I0731 15:17:36.067124    6468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4620,"bootTime":1722459636,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:36.067203    6468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:36.072299    6468 out.go:177] * [no-preload-428000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:36.079386    6468 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:36.079472    6468 notify.go:220] Checking for updates...
	I0731 15:17:36.086394    6468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:36.089430    6468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:36.092419    6468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:36.095406    6468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:36.098368    6468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:36.101578    6468 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:17:36.101653    6468 config.go:182] Loaded profile config "stopped-upgrade-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 15:17:36.101703    6468 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:36.106370    6468 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:17:36.112250    6468 start.go:297] selected driver: qemu2
	I0731 15:17:36.112256    6468 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:17:36.112262    6468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:36.114568    6468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:17:36.117393    6468 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:17:36.120435    6468 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:36.120464    6468 cni.go:84] Creating CNI manager for ""
	I0731 15:17:36.120472    6468 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:17:36.120479    6468 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:17:36.120508    6468 start.go:340] cluster config:
	{Name:no-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:36.124141    6468 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.131305    6468 out.go:177] * Starting "no-preload-428000" primary control-plane node in "no-preload-428000" cluster
	I0731 15:17:36.135374    6468 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 15:17:36.135443    6468 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/no-preload-428000/config.json ...
	I0731 15:17:36.135457    6468 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/no-preload-428000/config.json: {Name:mk88d856ec64c086bbc4ee14bcf78b2910cf8254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:17:36.135456    6468 cache.go:107] acquiring lock: {Name:mkd1a0036729f2aecb30e56732968eecdf60281e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135471    6468 cache.go:107] acquiring lock: {Name:mk995eea773c24be7a62e4fa4e4145fcf0445493 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135517    6468 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 15:17:36.135510    6468 cache.go:107] acquiring lock: {Name:mkd624e419c21617ab294f7a302681ddfbee7e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135525    6468 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.5µs
	I0731 15:17:36.135532    6468 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 15:17:36.135537    6468 cache.go:107] acquiring lock: {Name:mkaa5909e28c05e6bd11b4de0767d7b74022374d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135457    6468 cache.go:107] acquiring lock: {Name:mk53158a07e2957681d3dd4f9adde687b1eace30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135601    6468 cache.go:107] acquiring lock: {Name:mk031db1a44fdf7329979a1417aa73d0d84e08a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135618    6468 cache.go:107] acquiring lock: {Name:mk715f00ed336394001d331c4e03cf4bf7806bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135685    6468 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 15:17:36.135709    6468 cache.go:107] acquiring lock: {Name:mk45d86d312280cbde14ab406215921f1f7c755b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:36.135730    6468 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 15:17:36.135757    6468 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 15:17:36.135788    6468 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 15:17:36.135825    6468 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 15:17:36.135857    6468 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0731 15:17:36.135889    6468 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 15:17:36.135901    6468 start.go:360] acquireMachinesLock for no-preload-428000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:36.135934    6468 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "no-preload-428000"
	I0731 15:17:36.135947    6468 start.go:93] Provisioning new machine with config: &{Name:no-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:36.135971    6468 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:36.140356    6468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:17:36.148907    6468 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 15:17:36.148967    6468 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 15:17:36.149007    6468 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 15:17:36.151088    6468 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 15:17:36.151652    6468 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 15:17:36.151754    6468 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 15:17:36.151788    6468 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 15:17:36.156259    6468 start.go:159] libmachine.API.Create for "no-preload-428000" (driver="qemu2")
	I0731 15:17:36.156285    6468 client.go:168] LocalClient.Create starting
	I0731 15:17:36.156361    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:36.156392    6468 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:36.156403    6468 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:36.156444    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:36.156468    6468 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:36.156478    6468 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:36.156842    6468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:36.328268    6468 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:36.404743    6468 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:36.404769    6468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:36.404993    6468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:36.415076    6468 main.go:141] libmachine: STDOUT: 
	I0731 15:17:36.415097    6468 main.go:141] libmachine: STDERR: 
	I0731 15:17:36.415148    6468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2 +20000M
	I0731 15:17:36.424581    6468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:36.424606    6468 main.go:141] libmachine: STDERR: 
	I0731 15:17:36.424625    6468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:36.424631    6468 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:36.424642    6468 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:36.424675    6468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:0e:ce:54:c0:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:36.427269    6468 main.go:141] libmachine: STDOUT: 
	I0731 15:17:36.427284    6468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:36.427301    6468 client.go:171] duration metric: took 271.015709ms to LocalClient.Create
	I0731 15:17:36.526918    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 15:17:36.536194    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 15:17:36.562137    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 15:17:36.576652    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 15:17:36.617528    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0731 15:17:36.617872    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0731 15:17:36.657758    6468 cache.go:162] opening:  /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 15:17:36.753963    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 15:17:36.753979    6468 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 618.370125ms
	I0731 15:17:36.753991    6468 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 15:17:38.427527    6468 start.go:128] duration metric: took 2.291572292s to createHost
	I0731 15:17:38.427585    6468 start.go:83] releasing machines lock for "no-preload-428000", held for 2.29167925s
	W0731 15:17:38.427628    6468 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:38.437872    6468 out.go:177] * Deleting "no-preload-428000" in qemu2 ...
	W0731 15:17:38.458103    6468 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:38.458124    6468 start.go:729] Will try again in 5 seconds ...
	I0731 15:17:39.636389    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 15:17:39.636412    6468 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.500929375s
	I0731 15:17:39.636423    6468 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 15:17:39.636743    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 15:17:39.636752    6468 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.501211583s
	I0731 15:17:39.636759    6468 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 15:17:40.304281    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 15:17:40.304305    6468 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.168898333s
	I0731 15:17:40.304318    6468 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 15:17:40.634067    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 15:17:40.634091    6468 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.49857475s
	I0731 15:17:40.634103    6468 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 15:17:41.634021    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 15:17:41.634074    6468 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 5.498701333s
	I0731 15:17:41.634127    6468 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 15:17:43.458448    6468 start.go:360] acquireMachinesLock for no-preload-428000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:43.458922    6468 start.go:364] duration metric: took 389.959µs to acquireMachinesLock for "no-preload-428000"
	I0731 15:17:43.459050    6468 start.go:93] Provisioning new machine with config: &{Name:no-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:43.459340    6468 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:43.467922    6468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:17:43.515029    6468 start.go:159] libmachine.API.Create for "no-preload-428000" (driver="qemu2")
	I0731 15:17:43.515100    6468 client.go:168] LocalClient.Create starting
	I0731 15:17:43.515210    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:43.515278    6468 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:43.515300    6468 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:43.515376    6468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:43.515423    6468 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:43.515440    6468 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:43.515991    6468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:43.686076    6468 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:43.836093    6468 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:43.836104    6468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:43.836321    6468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:43.845595    6468 main.go:141] libmachine: STDOUT: 
	I0731 15:17:43.845624    6468 main.go:141] libmachine: STDERR: 
	I0731 15:17:43.845671    6468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2 +20000M
	I0731 15:17:43.853901    6468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:43.853918    6468 main.go:141] libmachine: STDERR: 
	I0731 15:17:43.853956    6468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:43.853961    6468 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:43.853976    6468 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:43.854023    6468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ea:a0:c0:ac:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:43.855803    6468 main.go:141] libmachine: STDOUT: 
	I0731 15:17:43.855828    6468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:43.855843    6468 client.go:171] duration metric: took 340.744041ms to LocalClient.Create
	I0731 15:17:45.301977    6468 cache.go:157] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 15:17:45.302034    6468 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 9.166675709s
	I0731 15:17:45.302051    6468 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 15:17:45.302085    6468 cache.go:87] Successfully saved all images to host disk.
	I0731 15:17:45.858030    6468 start.go:128] duration metric: took 2.398651542s to createHost
	I0731 15:17:45.858076    6468 start.go:83] releasing machines lock for "no-preload-428000", held for 2.399167583s
	W0731 15:17:45.858376    6468 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-428000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-428000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:45.867972    6468 out.go:177] 
	W0731 15:17:45.875107    6468 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:45.875137    6468 out.go:239] * 
	* 
	W0731 15:17:45.878055    6468 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:45.887879    6468 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (64.434667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
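Triage note: the stderr above shows two independent tracks: image caching (cache.go) completes successfully in the background while host creation fails twice, five seconds apart, per start.go's "Will try again in 5 seconds ..." retry. A Go sketch of that two-attempt, fixed-delay retry shape (a simplification for illustration, not minikube's actual start.go):

    package main

    import (
        "fmt"
        "time"
    )

    // startWithRetry mirrors the shape of the run above: one failed create,
    // a fixed wait, one retry, then give up and return the last error.
    func startWithRetry(start func() error, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = start(); err == nil {
                return nil
            }
            if i < attempts-1 {
                fmt.Printf("! StartHost failed, but will try again: %v\n", err)
                time.Sleep(delay)
            }
        }
        return err
    }

    func main() {
        fail := func() error {
            return fmt.Errorf(`connect "/var/run/socket_vmnet": connection refused`)
        }
        fmt.Println(startWithRetry(fail, 2, 5*time.Second))
    }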

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-428000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-428000 create -f testdata/busybox.yaml: exit status 1 (29.586459ms)

** stderr ** 
	error: context "no-preload-428000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-428000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (28.487291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (29.575291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-428000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-428000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-428000 describe deploy/metrics-server -n kube-system: exit status 1 (26.813375ms)

** stderr ** 
	error: context "no-preload-428000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-428000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (28.269291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
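Triage note: the assertion at start_stop_delete_test.go:221 wants the metrics-server deployment image to carry the custom registry passed to "addons enable". A hedged reconstruction of that check in Go using kubectl's jsonpath output (the profile name and expected image come from the log; the jsonpath query and flow are assumptions, not the test's actual helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Fetch the metrics-server image and check it carries the custom
        // registry prefix that "addons enable" was given.
        out, err := exec.Command("kubectl", "--context", "no-preload-428000",
            "-n", "kube-system", "get", "deploy/metrics-server",
            "-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
        if err != nil {
            fmt.Println("lookup failed:", err) // here: context does not exist
            return
        }
        if !strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
            fmt.Println("addon did not load correct image:", string(out))
        }
    }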

TestStartStop/group/no-preload/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.177366417s)

-- stdout --
	* [no-preload-428000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-428000" primary control-plane node in "no-preload-428000" cluster
	* Restarting existing qemu2 VM for "no-preload-428000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-428000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:17:50.179783    6559 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:50.179935    6559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:50.179938    6559 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:50.179941    6559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:50.180074    6559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:50.181024    6559 out.go:298] Setting JSON to false
	I0731 15:17:50.197471    6559 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4634,"bootTime":1722459636,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:50.197535    6559 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:50.201447    6559 out.go:177] * [no-preload-428000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:50.208357    6559 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:50.208459    6559 notify.go:220] Checking for updates...
	I0731 15:17:50.215464    6559 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:50.218395    6559 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:50.221375    6559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:50.224377    6559 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:50.227306    6559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:50.230598    6559 config.go:182] Loaded profile config "no-preload-428000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 15:17:50.230878    6559 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:50.235399    6559 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:17:50.242386    6559 start.go:297] selected driver: qemu2
	I0731 15:17:50.242391    6559 start.go:901] validating driver "qemu2" against &{Name:no-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:50.242452    6559 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:50.244623    6559 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:50.244646    6559 cni.go:84] Creating CNI manager for ""
	I0731 15:17:50.244652    6559 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:17:50.244671    6559 start.go:340] cluster config:
	{Name:no-preload-428000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-428000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:50.247932    6559 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.255379    6559 out.go:177] * Starting "no-preload-428000" primary control-plane node in "no-preload-428000" cluster
	I0731 15:17:50.259359    6559 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 15:17:50.259457    6559 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/no-preload-428000/config.json ...
	I0731 15:17:50.259458    6559 cache.go:107] acquiring lock: {Name:mkd1a0036729f2aecb30e56732968eecdf60281e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259463    6559 cache.go:107] acquiring lock: {Name:mk031db1a44fdf7329979a1417aa73d0d84e08a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259471    6559 cache.go:107] acquiring lock: {Name:mk715f00ed336394001d331c4e03cf4bf7806bfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259510    6559 cache.go:107] acquiring lock: {Name:mk53158a07e2957681d3dd4f9adde687b1eace30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259532    6559 cache.go:107] acquiring lock: {Name:mkd624e419c21617ab294f7a302681ddfbee7e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259544    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 15:17:50.259555    6559 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 96.417µs
	I0731 15:17:50.259583    6559 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 15:17:50.259590    6559 cache.go:107] acquiring lock: {Name:mk995eea773c24be7a62e4fa4e4145fcf0445493 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259607    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 15:17:50.259613    6559 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 85.25µs
	I0731 15:17:50.259617    6559 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 15:17:50.259540    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 15:17:50.259621    6559 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 166.042µs
	I0731 15:17:50.259624    6559 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 15:17:50.259628    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 15:17:50.259629    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 15:17:50.259631    6559 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 42.125µs
	I0731 15:17:50.259634    6559 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 15:17:50.259636    6559 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 179.333µs
	I0731 15:17:50.259645    6559 cache.go:107] acquiring lock: {Name:mkaa5909e28c05e6bd11b4de0767d7b74022374d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259650    6559 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 15:17:50.259630    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 15:17:50.259659    6559 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 149.792µs
	I0731 15:17:50.259665    6559 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 15:17:50.259656    6559 cache.go:107] acquiring lock: {Name:mk45d86d312280cbde14ab406215921f1f7c755b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:50.259694    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 15:17:50.259701    6559 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 87.542µs
	I0731 15:17:50.259710    6559 cache.go:115] /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 15:17:50.259711    6559 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 15:17:50.259714    6559 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 133.25µs
	I0731 15:17:50.259720    6559 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 15:17:50.259722    6559 cache.go:87] Successfully saved all images to host disk.
	I0731 15:17:50.259838    6559 start.go:360] acquireMachinesLock for no-preload-428000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:50.259870    6559 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "no-preload-428000"
	I0731 15:17:50.259879    6559 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:17:50.259883    6559 fix.go:54] fixHost starting: 
	I0731 15:17:50.259996    6559 fix.go:112] recreateIfNeeded on no-preload-428000: state=Stopped err=<nil>
	W0731 15:17:50.260005    6559 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:17:50.268389    6559 out.go:177] * Restarting existing qemu2 VM for "no-preload-428000" ...
	I0731 15:17:50.272375    6559 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:50.272409    6559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ea:a0:c0:ac:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:50.274286    6559 main.go:141] libmachine: STDOUT: 
	I0731 15:17:50.274307    6559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:50.274334    6559 fix.go:56] duration metric: took 14.451041ms for fixHost
	I0731 15:17:50.274338    6559 start.go:83] releasing machines lock for "no-preload-428000", held for 14.464333ms
	W0731 15:17:50.274345    6559 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:50.274371    6559 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:50.274375    6559 start.go:729] Will try again in 5 seconds ...
	I0731 15:17:55.276347    6559 start.go:360] acquireMachinesLock for no-preload-428000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:55.276420    6559 start.go:364] duration metric: took 59.125µs to acquireMachinesLock for "no-preload-428000"
	I0731 15:17:55.276436    6559 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:17:55.276440    6559 fix.go:54] fixHost starting: 
	I0731 15:17:55.276578    6559 fix.go:112] recreateIfNeeded on no-preload-428000: state=Stopped err=<nil>
	W0731 15:17:55.276586    6559 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:17:55.280701    6559 out.go:177] * Restarting existing qemu2 VM for "no-preload-428000" ...
	I0731 15:17:55.287502    6559 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:55.287547    6559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ea:a0:c0:ac:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/no-preload-428000/disk.qcow2
	I0731 15:17:55.289400    6559 main.go:141] libmachine: STDOUT: 
	I0731 15:17:55.289415    6559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:55.289433    6559 fix.go:56] duration metric: took 12.992791ms for fixHost
	I0731 15:17:55.289437    6559 start.go:83] releasing machines lock for "no-preload-428000", held for 13.013ms
	W0731 15:17:55.289472    6559 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-428000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-428000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:55.304569    6559 out.go:177] 
	W0731 15:17:55.307552    6559 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:17:55.307556    6559 out.go:239] * 
	* 
	W0731 15:17:55.308033    6559 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:17:55.319538    6559 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-428000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (37.516666ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.22s)
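
Every qemu2 start in this run dies at the same step: `socket_vmnet_client` cannot reach the vmnet daemon's Unix socket at /var/run/socket_vmnet, so QEMU is never launched. A first triage pass on the agent might look like the sketch below; the Homebrew service name is an assumption based on the /opt/socket_vmnet and /opt/homebrew paths in the log, not something the log confirms:

	# is the socket present, and is any process listening on it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# the daemon must run as root to create vmnet interfaces; restart it
	sudo brew services restart socket_vmnet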

TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-511000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-511000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.957049625s)

-- stdout --
	* [embed-certs-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-511000" primary control-plane node in "embed-certs-511000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-511000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:17:55.269941    6572 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:55.270073    6572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:55.270076    6572 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:55.270079    6572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:55.270194    6572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:55.271119    6572 out.go:298] Setting JSON to false
	I0731 15:17:55.287819    6572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4639,"bootTime":1722459636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:55.287889    6572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:55.292555    6572 out.go:177] * [embed-certs-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:55.304659    6572 notify.go:220] Checking for updates...
	I0731 15:17:55.307532    6572 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:55.319530    6572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:55.330518    6572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:55.338450    6572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:55.346500    6572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:55.354512    6572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:55.358675    6572 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:17:55.358747    6572 config.go:182] Loaded profile config "no-preload-428000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 15:17:55.358791    6572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:55.362504    6572 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:17:55.371501    6572 start.go:297] selected driver: qemu2
	I0731 15:17:55.371507    6572 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:17:55.371514    6572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:55.373872    6572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:17:55.377457    6572 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:17:55.381601    6572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:55.381620    6572 cni.go:84] Creating CNI manager for ""
	I0731 15:17:55.381626    6572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:17:55.381631    6572 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:17:55.381656    6572 start.go:340] cluster config:
	{Name:embed-certs-511000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:55.386440    6572 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:55.394458    6572 out.go:177] * Starting "embed-certs-511000" primary control-plane node in "embed-certs-511000" cluster
	I0731 15:17:55.398464    6572 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:17:55.398482    6572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:17:55.398489    6572 cache.go:56] Caching tarball of preloaded images
	I0731 15:17:55.398563    6572 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:17:55.398570    6572 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:17:55.398640    6572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/embed-certs-511000/config.json ...
	I0731 15:17:55.398650    6572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/embed-certs-511000/config.json: {Name:mk35e7e96d3acd9b9ae979d86f03ae351612a088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:17:55.398838    6572 start.go:360] acquireMachinesLock for embed-certs-511000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:55.398868    6572 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "embed-certs-511000"
	I0731 15:17:55.398879    6572 start.go:93] Provisioning new machine with config: &{Name:embed-certs-511000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:55.398905    6572 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:55.409402    6572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:17:55.425277    6572 start.go:159] libmachine.API.Create for "embed-certs-511000" (driver="qemu2")
	I0731 15:17:55.425302    6572 client.go:168] LocalClient.Create starting
	I0731 15:17:55.425403    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:55.425433    6572 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:55.425443    6572 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:55.425480    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:55.425503    6572 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:55.425511    6572 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:55.425853    6572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:55.647371    6572 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:55.740812    6572 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:55.740819    6572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:55.740984    6572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:17:55.751951    6572 main.go:141] libmachine: STDOUT: 
	I0731 15:17:55.751977    6572 main.go:141] libmachine: STDERR: 
	I0731 15:17:55.752039    6572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2 +20000M
	I0731 15:17:55.762075    6572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:55.762090    6572 main.go:141] libmachine: STDERR: 
	I0731 15:17:55.762112    6572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:17:55.762117    6572 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:55.762129    6572 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:55.762156    6572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:cd:c0:c4:1d:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:17:55.763873    6572 main.go:141] libmachine: STDOUT: 
	I0731 15:17:55.763889    6572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:55.763909    6572 client.go:171] duration metric: took 338.60825ms to LocalClient.Create
	I0731 15:17:57.766094    6572 start.go:128] duration metric: took 2.367204167s to createHost
	I0731 15:17:57.766168    6572 start.go:83] releasing machines lock for "embed-certs-511000", held for 2.36732875s
	W0731 15:17:57.766235    6572 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:57.778254    6572 out.go:177] * Deleting "embed-certs-511000" in qemu2 ...
	W0731 15:17:57.801311    6572 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:17:57.801337    6572 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:02.803569    6572 start.go:360] acquireMachinesLock for embed-certs-511000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:02.804062    6572 start.go:364] duration metric: took 389.416µs to acquireMachinesLock for "embed-certs-511000"
	I0731 15:18:02.804228    6572 start.go:93] Provisioning new machine with config: &{Name:embed-certs-511000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:18:02.804673    6572 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:18:02.811359    6572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:18:02.860158    6572 start.go:159] libmachine.API.Create for "embed-certs-511000" (driver="qemu2")
	I0731 15:18:02.860222    6572 client.go:168] LocalClient.Create starting
	I0731 15:18:02.860329    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:18:02.860387    6572 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:02.860400    6572 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:02.860473    6572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:18:02.860515    6572 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:02.860528    6572 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:02.861054    6572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:18:03.027827    6572 main.go:141] libmachine: Creating SSH key...
	I0731 15:18:03.126113    6572 main.go:141] libmachine: Creating Disk image...
	I0731 15:18:03.126118    6572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:18:03.126317    6572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:18:03.135758    6572 main.go:141] libmachine: STDOUT: 
	I0731 15:18:03.135792    6572 main.go:141] libmachine: STDERR: 
	I0731 15:18:03.135839    6572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2 +20000M
	I0731 15:18:03.143849    6572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:18:03.143862    6572 main.go:141] libmachine: STDERR: 
	I0731 15:18:03.143879    6572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:18:03.143883    6572 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:18:03.143894    6572 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:03.143920    6572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0c:0c:69:22:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:18:03.145530    6572 main.go:141] libmachine: STDOUT: 
	I0731 15:18:03.145544    6572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:03.145558    6572 client.go:171] duration metric: took 285.335ms to LocalClient.Create
	I0731 15:18:05.147694    6572 start.go:128] duration metric: took 2.343007041s to createHost
	I0731 15:18:05.147749    6572 start.go:83] releasing machines lock for "embed-certs-511000", held for 2.343701292s
	W0731 15:18:05.148135    6572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-511000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-511000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:05.158755    6572 out.go:177] 
	W0731 15:18:05.165944    6572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:05.165972    6572 out.go:239] * 
	* 
	W0731 15:18:05.168404    6572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:18:05.177639    6572 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-511000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (70.902833ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
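
Worth noting in the stderr above: `qemu-img convert` and `qemu-img resize` both succeed (empty STDERR, "Image resized."), so the QEMU tooling is healthy and disk images are being created; only the final `socket_vmnet_client ... qemu-system-aarch64 ...` step fails. Since the client connects to the socket and hands the connection to the command it runs as fd 3 (hence `-netdev socket,id=net0,fd=3` in the invocation), the failing step can be probed in isolation with a harmless command in place of QEMU. A hypothetical probe, not something the suite runs:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket reachable" \
	  || echo "still refused: daemon not listening"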

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-428000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (36.65125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
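
The remaining no-preload failures are downstream of the failed SecondStart: the cluster never came up, so no kubeconfig context named "no-preload-428000" was written, and every client-side assertion fails at config-load time with "context ... does not exist". That is checkable with stock kubectl against this run's kubeconfig:

	export KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	kubectl config get-contexts   # no no-preload-428000 entry is expected here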

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-428000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-428000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-428000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.150708ms)

** stderr ** 
	error: context "no-preload-428000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-428000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (32.1565ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
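
For reference, the image expectation that just failed comes from the profile's addon overrides: CustomAddonImages maps both dashboard images to registry.k8s.io/echoserver:1.4 in the cluster-config dump earlier in this report. On a run where the cluster actually starts, the assertion reduces to roughly this hand check (a sketch of the test's intent, not its exact code):

	kubectl --context no-preload-428000 \
	  describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard \
	  | grep 'registry.k8s.io/echoserver:1.4'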

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-428000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (32.058583ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
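
The want/got diff above lists every expected v1.31.0-beta.0 image as missing because `image list` queries the container runtime inside the VM, and no VM exists. The images were cached host-side earlier (the cache.go "save to tar file ... succeeded" lines in SecondStart), which can be confirmed directly; MINIKUBE_HOME below is the value from this run's environment dump:

	MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	ls "$MINIKUBE_HOME/cache/images/arm64/registry.k8s.io"
	# expect kube-apiserver_v1.31.0-beta.0, kube-proxy_v1.31.0-beta.0, etcd_3.5.14-0, pause_3.10, coredns/, ...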

TestStartStop/group/no-preload/serial/Pause (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-428000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-428000 --alsologtostderr -v=1: exit status 83 (74.406208ms)

-- stdout --
	* The control-plane node no-preload-428000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-428000"

                                                
** stderr ** 
	I0731 15:17:55.587659    6591 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:55.587826    6591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:55.587830    6591 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:55.587832    6591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:55.587973    6591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:55.588171    6591 out.go:298] Setting JSON to false
	I0731 15:17:55.588179    6591 mustload.go:65] Loading cluster: no-preload-428000
	I0731 15:17:55.588374    6591 config.go:182] Loaded profile config "no-preload-428000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 15:17:55.614043    6591 out.go:177] * The control-plane node no-preload-428000 host is not running: state=Stopped
	I0731 15:17:55.624605    6591 out.go:177]   To start a cluster, run: "minikube start -p no-preload-428000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-428000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (30.989458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (28.863917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-428000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.13s)
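
The pause attempt above exits with status 83 because the control-plane host is Stopped. A sketch of gating `minikube pause` on host state, reusing the probe idea from the previous snippet; binary path and profile name again come from this log and the check is an illustration, not minikube's own precondition handling.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-darwin-arm64", "no-preload-428000"
	// Only attempt pause when the host reports Running; otherwise surface
	// the same advice the CLI prints in the log above.
	out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	if strings.TrimSpace(string(out)) != "Running" {
		fmt.Println("host not running; start it first: minikube start -p", profile)
		return
	}
	if err := exec.Command(bin, "pause", "-p", profile).Run(); err != nil {
		fmt.Println("pause failed:", err)
	}
}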

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-416000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-416000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.596784542s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-416000" primary control-plane node in "default-k8s-diff-port-416000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-416000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:17:56.036261    6618 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:17:56.036382    6618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:56.036387    6618 out.go:304] Setting ErrFile to fd 2...
	I0731 15:17:56.036389    6618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:17:56.036510    6618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:17:56.037608    6618 out.go:298] Setting JSON to false
	I0731 15:17:56.053562    6618 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4640,"bootTime":1722459636,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:17:56.053638    6618 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:17:56.058483    6618 out.go:177] * [default-k8s-diff-port-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:17:56.065447    6618 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:17:56.065537    6618 notify.go:220] Checking for updates...
	I0731 15:17:56.070778    6618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:17:56.073504    6618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:17:56.076532    6618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:17:56.079573    6618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:17:56.082519    6618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:17:56.085819    6618 config.go:182] Loaded profile config "embed-certs-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:17:56.085890    6618 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:17:56.085928    6618 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:17:56.090571    6618 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:17:56.097449    6618 start.go:297] selected driver: qemu2
	I0731 15:17:56.097455    6618 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:17:56.097462    6618 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:17:56.099816    6618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:17:56.102557    6618 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:17:56.105552    6618 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:17:56.105570    6618 cni.go:84] Creating CNI manager for ""
	I0731 15:17:56.105577    6618 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:17:56.105581    6618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:17:56.105612    6618 start.go:340] cluster config:
	{Name:default-k8s-diff-port-416000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:17:56.109388    6618 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:17:56.116305    6618 out.go:177] * Starting "default-k8s-diff-port-416000" primary control-plane node in "default-k8s-diff-port-416000" cluster
	I0731 15:17:56.120531    6618 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:17:56.120548    6618 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:17:56.120559    6618 cache.go:56] Caching tarball of preloaded images
	I0731 15:17:56.120629    6618 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:17:56.120637    6618 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:17:56.120710    6618 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/default-k8s-diff-port-416000/config.json ...
	I0731 15:17:56.120721    6618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/default-k8s-diff-port-416000/config.json: {Name:mkba993b60300b3a33a97cce8f65a71a5db218b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:17:56.121038    6618 start.go:360] acquireMachinesLock for default-k8s-diff-port-416000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:17:57.766316    6618 start.go:364] duration metric: took 1.645279083s to acquireMachinesLock for "default-k8s-diff-port-416000"
	I0731 15:17:57.766485    6618 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:17:57.766662    6618 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:17:57.771286    6618 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:17:57.820814    6618 start.go:159] libmachine.API.Create for "default-k8s-diff-port-416000" (driver="qemu2")
	I0731 15:17:57.820858    6618 client.go:168] LocalClient.Create starting
	I0731 15:17:57.820994    6618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:17:57.821052    6618 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:57.821073    6618 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:57.821143    6618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:17:57.821188    6618 main.go:141] libmachine: Decoding PEM data...
	I0731 15:17:57.821203    6618 main.go:141] libmachine: Parsing certificate...
	I0731 15:17:57.821942    6618 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:17:57.983117    6618 main.go:141] libmachine: Creating SSH key...
	I0731 15:17:58.117144    6618 main.go:141] libmachine: Creating Disk image...
	I0731 15:17:58.117150    6618 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:17:58.117347    6618 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:17:58.126818    6618 main.go:141] libmachine: STDOUT: 
	I0731 15:17:58.126832    6618 main.go:141] libmachine: STDERR: 
	I0731 15:17:58.126874    6618 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2 +20000M
	I0731 15:17:58.134635    6618 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:17:58.134649    6618 main.go:141] libmachine: STDERR: 
	I0731 15:17:58.134661    6618 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:17:58.134665    6618 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:17:58.134677    6618 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:17:58.134700    6618 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:d9:de:b0:da:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:17:58.136295    6618 main.go:141] libmachine: STDOUT: 
	I0731 15:17:58.136313    6618 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:17:58.136332    6618 client.go:171] duration metric: took 315.471292ms to LocalClient.Create
	I0731 15:18:00.138587    6618 start.go:128] duration metric: took 2.371864042s to createHost
	I0731 15:18:00.138640    6618 start.go:83] releasing machines lock for "default-k8s-diff-port-416000", held for 2.37230525s
	W0731 15:18:00.138704    6618 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:00.153603    6618 out.go:177] * Deleting "default-k8s-diff-port-416000" in qemu2 ...
	W0731 15:18:00.185872    6618 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:00.185914    6618 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:05.188009    6618 start.go:360] acquireMachinesLock for default-k8s-diff-port-416000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:05.188415    6618 start.go:364] duration metric: took 314.875µs to acquireMachinesLock for "default-k8s-diff-port-416000"
	I0731 15:18:05.188544    6618 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:18:05.188836    6618 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:18:05.198778    6618 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:18:05.249100    6618 start.go:159] libmachine.API.Create for "default-k8s-diff-port-416000" (driver="qemu2")
	I0731 15:18:05.249146    6618 client.go:168] LocalClient.Create starting
	I0731 15:18:05.249241    6618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:18:05.249292    6618 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:05.249309    6618 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:05.249378    6618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:18:05.249407    6618 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:05.249422    6618 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:05.249899    6618 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:18:05.444428    6618 main.go:141] libmachine: Creating SSH key...
	I0731 15:18:05.538408    6618 main.go:141] libmachine: Creating Disk image...
	I0731 15:18:05.538415    6618 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:18:05.538618    6618 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:18:05.552456    6618 main.go:141] libmachine: STDOUT: 
	I0731 15:18:05.552475    6618 main.go:141] libmachine: STDERR: 
	I0731 15:18:05.552527    6618 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2 +20000M
	I0731 15:18:05.560642    6618 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:18:05.560657    6618 main.go:141] libmachine: STDERR: 
	I0731 15:18:05.560675    6618 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:18:05.560680    6618 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:18:05.560690    6618 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:05.560716    6618 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f2:08:07:a7:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:18:05.562297    6618 main.go:141] libmachine: STDOUT: 
	I0731 15:18:05.562312    6618 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:05.562324    6618 client.go:171] duration metric: took 313.176833ms to LocalClient.Create
	I0731 15:18:07.564490    6618 start.go:128] duration metric: took 2.375665958s to createHost
	I0731 15:18:07.564534    6618 start.go:83] releasing machines lock for "default-k8s-diff-port-416000", held for 2.376134708s
	W0731 15:18:07.564881    6618 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:07.574523    6618 out.go:177] 
	W0731 15:18:07.578587    6618 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:07.578612    6618 out.go:239] * 
	* 
	W0731 15:18:07.581204    6618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:18:07.591499    6618 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-416000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (63.73325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.66s)
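
Every start in this report dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the dial reaches the socket path but no socket_vmnet daemon is accepting connections. The probe below is a sketch for distinguishing "daemon not listening" from "socket file missing"; the socket path comes from the log, and the error classification is standard Go, not anything minikube-specific.

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("socket file missing; socket_vmnet was never started")
	default:
		// ECONNREFUSED lands here: a stale socket file with no daemon
		// behind it, which matches the failures in this report.
		fmt.Println("dial failed:", err)
	}
}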

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-511000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-511000 create -f testdata/busybox.yaml: exit status 1 (33.207917ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-511000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-511000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (32.724666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (32.567167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
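
The deploy step fails with `context "embed-certs-511000" does not exist` because the cluster never came up, so no kubeconfig context was written. A sketch of checking for the context before applying manifests; it assumes kubectl is on PATH and uses its `config get-contexts -o name` output.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the named context is present in the
// active kubeconfig.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("embed-certs-511000")
	fmt.Println(ok, err) // false, nil on this runner: the start step failed
}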

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-511000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-511000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-511000 describe deploy/metrics-server -n kube-system: exit status 1 (27.940416ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-511000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-511000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (29.167541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)
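
The addon check above expects the metrics-server deployment to reference " fake.domain/registry.k8s.io/echoserver:1.4". A sketch of that assertion using kubectl's jsonpath output; the context, namespace, and deployment names come from this log, and the expected image is the test's own value.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "embed-certs-511000",
		"-n", "kube-system",
		"get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		// Expected on this runner: the context does not exist.
		fmt.Println("lookup failed:", err)
		return
	}
	const want = "fake.domain/registry.k8s.io/echoserver:1.4"
	fmt.Println("image ok:", strings.Contains(string(out), want))
}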

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-416000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-416000 create -f testdata/busybox.yaml: exit status 1 (29.80075ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-416000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-416000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (27.366333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (29.252292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-416000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-416000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-416000 describe deploy/metrics-server -n kube-system: exit status 1 (26.6675ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-416000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-416000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (29.207584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-511000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-511000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.182971542s)

                                                
                                                
-- stdout --
	* [embed-certs-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-511000" primary control-plane node in "embed-certs-511000" cluster
	* Restarting existing qemu2 VM for "embed-certs-511000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-511000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:18:08.903270    6690 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:08.903405    6690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:08.903411    6690 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:08.903414    6690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:08.903542    6690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:08.904553    6690 out.go:298] Setting JSON to false
	I0731 15:18:08.920449    6690 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4652,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:18:08.920529    6690 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:18:08.924416    6690 out.go:177] * [embed-certs-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:18:08.931352    6690 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:18:08.931418    6690 notify.go:220] Checking for updates...
	I0731 15:18:08.938292    6690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:18:08.941292    6690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:18:08.944333    6690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:18:08.947280    6690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:18:08.950364    6690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:18:08.953658    6690 config.go:182] Loaded profile config "embed-certs-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:18:08.953906    6690 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:18:08.958298    6690 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:18:08.965362    6690 start.go:297] selected driver: qemu2
	I0731 15:18:08.965369    6690 start.go:901] validating driver "qemu2" against &{Name:embed-certs-511000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:08.965459    6690 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:18:08.967667    6690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:18:08.967708    6690 cni.go:84] Creating CNI manager for ""
	I0731 15:18:08.967715    6690 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:18:08.967736    6690 start.go:340] cluster config:
	{Name:embed-certs-511000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:08.971299    6690 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:18:08.979345    6690 out.go:177] * Starting "embed-certs-511000" primary control-plane node in "embed-certs-511000" cluster
	I0731 15:18:08.983104    6690 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:18:08.983119    6690 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:18:08.983130    6690 cache.go:56] Caching tarball of preloaded images
	I0731 15:18:08.983191    6690 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:18:08.983198    6690 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:18:08.983250    6690 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/embed-certs-511000/config.json ...
	I0731 15:18:08.983696    6690 start.go:360] acquireMachinesLock for embed-certs-511000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:08.983732    6690 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "embed-certs-511000"
	I0731 15:18:08.983742    6690 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:08.983747    6690 fix.go:54] fixHost starting: 
	I0731 15:18:08.983864    6690 fix.go:112] recreateIfNeeded on embed-certs-511000: state=Stopped err=<nil>
	W0731 15:18:08.983872    6690 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:18:08.992360    6690 out.go:177] * Restarting existing qemu2 VM for "embed-certs-511000" ...
	I0731 15:18:08.996272    6690 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:08.996306    6690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0c:0c:69:22:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:18:08.998409    6690 main.go:141] libmachine: STDOUT: 
	I0731 15:18:08.998431    6690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:08.998469    6690 fix.go:56] duration metric: took 14.722667ms for fixHost
	I0731 15:18:08.998474    6690 start.go:83] releasing machines lock for "embed-certs-511000", held for 14.737667ms
	W0731 15:18:08.998482    6690 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:08.998520    6690 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:08.998525    6690 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:14.000619    6690 start.go:360] acquireMachinesLock for embed-certs-511000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:14.001056    6690 start.go:364] duration metric: took 321.541µs to acquireMachinesLock for "embed-certs-511000"
	I0731 15:18:14.001177    6690 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:14.001198    6690 fix.go:54] fixHost starting: 
	I0731 15:18:14.001881    6690 fix.go:112] recreateIfNeeded on embed-certs-511000: state=Stopped err=<nil>
	W0731 15:18:14.001910    6690 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:18:14.006429    6690 out.go:177] * Restarting existing qemu2 VM for "embed-certs-511000" ...
	I0731 15:18:14.014360    6690 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:14.014628    6690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0c:0c:69:22:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/embed-certs-511000/disk.qcow2
	I0731 15:18:14.023807    6690 main.go:141] libmachine: STDOUT: 
	I0731 15:18:14.023867    6690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:14.023954    6690 fix.go:56] duration metric: took 22.76025ms for fixHost
	I0731 15:18:14.023968    6690 start.go:83] releasing machines lock for "embed-certs-511000", held for 22.888833ms
	W0731 15:18:14.024113    6690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-511000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-511000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:14.033385    6690 out.go:177] 
	W0731 15:18:14.037386    6690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:14.037436    6690 out.go:239] * 
	* 
	W0731 15:18:14.040245    6690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:18:14.045365    6690 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-511000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (66.702042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
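
Every failed start in this report shares one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet. A minimal Go sketch (not part of the test suite; the socket path is copied from the SocketVMnetPath field in the cluster configs logged below) that reproduces the probe which keeps failing:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client forwards qemu traffic
	// through. With the daemon down, this returns "connection refused",
	// matching the STDERR on every qemu2 start in this report.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}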

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-416000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-416000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.804097667s)

-- stdout --
	* [default-k8s-diff-port-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-416000" primary control-plane node in "default-k8s-diff-port-416000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:18:11.469840    6714 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:11.469973    6714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:11.469976    6714 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:11.469979    6714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:11.470115    6714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:11.471154    6714 out.go:298] Setting JSON to false
	I0731 15:18:11.487144    6714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4655,"bootTime":1722459636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:18:11.487214    6714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:18:11.490870    6714 out.go:177] * [default-k8s-diff-port-416000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:18:11.497899    6714 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:18:11.497957    6714 notify.go:220] Checking for updates...
	I0731 15:18:11.504876    6714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:18:11.507853    6714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:18:11.510817    6714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:18:11.513806    6714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:18:11.516820    6714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:18:11.520055    6714 config.go:182] Loaded profile config "default-k8s-diff-port-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:18:11.520320    6714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:18:11.524784    6714 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:18:11.531846    6714 start.go:297] selected driver: qemu2
	I0731 15:18:11.531852    6714 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:11.531927    6714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:18:11.534403    6714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:18:11.534457    6714 cni.go:84] Creating CNI manager for ""
	I0731 15:18:11.534465    6714 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:18:11.534497    6714 start.go:340] cluster config:
	{Name:default-k8s-diff-port-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:11.538237    6714 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:18:11.545753    6714 out.go:177] * Starting "default-k8s-diff-port-416000" primary control-plane node in "default-k8s-diff-port-416000" cluster
	I0731 15:18:11.549803    6714 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:18:11.549818    6714 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 15:18:11.549830    6714 cache.go:56] Caching tarball of preloaded images
	I0731 15:18:11.549886    6714 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:18:11.549894    6714 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:18:11.549970    6714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/default-k8s-diff-port-416000/config.json ...
	I0731 15:18:11.550421    6714 start.go:360] acquireMachinesLock for default-k8s-diff-port-416000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:11.550457    6714 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "default-k8s-diff-port-416000"
	I0731 15:18:11.550484    6714 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:11.550491    6714 fix.go:54] fixHost starting: 
	I0731 15:18:11.550604    6714 fix.go:112] recreateIfNeeded on default-k8s-diff-port-416000: state=Stopped err=<nil>
	W0731 15:18:11.550613    6714 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:18:11.554853    6714 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-416000" ...
	I0731 15:18:11.562763    6714 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:11.562799    6714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f2:08:07:a7:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:18:11.564837    6714 main.go:141] libmachine: STDOUT: 
	I0731 15:18:11.564856    6714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:11.564881    6714 fix.go:56] duration metric: took 14.391ms for fixHost
	I0731 15:18:11.564886    6714 start.go:83] releasing machines lock for "default-k8s-diff-port-416000", held for 14.42475ms
	W0731 15:18:11.564892    6714 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:11.564925    6714 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:11.564930    6714 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:16.567026    6714 start.go:360] acquireMachinesLock for default-k8s-diff-port-416000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:17.163655    6714 start.go:364] duration metric: took 596.551834ms to acquireMachinesLock for "default-k8s-diff-port-416000"
	I0731 15:18:17.163833    6714 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:17.163895    6714 fix.go:54] fixHost starting: 
	I0731 15:18:17.164674    6714 fix.go:112] recreateIfNeeded on default-k8s-diff-port-416000: state=Stopped err=<nil>
	W0731 15:18:17.164703    6714 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:18:17.174107    6714 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-416000" ...
	I0731 15:18:17.188191    6714 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:17.188469    6714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f2:08:07:a7:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/default-k8s-diff-port-416000/disk.qcow2
	I0731 15:18:17.200952    6714 main.go:141] libmachine: STDOUT: 
	I0731 15:18:17.201030    6714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:17.201112    6714 fix.go:56] duration metric: took 37.220583ms for fixHost
	I0731 15:18:17.201134    6714 start.go:83] releasing machines lock for "default-k8s-diff-port-416000", held for 37.454ms
	W0731 15:18:17.201336    6714 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:17.210014    6714 out.go:177] 
	W0731 15:18:17.215262    6714 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:17.215291    6714 out.go:239] * 
	* 
	W0731 15:18:17.217058    6714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:18:17.232158    6714 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-416000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (63.477166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.87s)
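
The stderr above makes the restart control flow visible: fixHost fails, minikube logs "StartHost failed, but will try again", sleeps a fixed 5 seconds, retries once, and only then exits with GUEST_PROVISION (exit status 80). A hedged sketch of that flow; startWithRetry and startHost are hypothetical stand-ins for illustration, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry mirrors the single fixed-delay retry seen in the log above.
func startWithRetry(startHost func() error) error {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			return fmt.Errorf("error provisioning guest: Failed to start host: %w", err)
		}
	}
	return nil
}

func main() {
	err := startWithRetry(func() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	fmt.Println(err) // both attempts fail, so the GUEST_PROVISION exit path is taken
}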

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-511000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (31.78275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-511000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-511000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-511000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.884458ms)

** stderr ** 
	error: context "embed-certs-511000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-511000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (29.39375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-511000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (28.8395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
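
The "(-want +got)" block above is go-cmp diff notation: every expected v1.30.3 image sits on the -want side because `image list` returned nothing from a VM that never booted. A sketch of how such a diff is produced (the go-cmp usage and the abridged image list are assumptions for illustration, not quotes from the harness):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the host never started, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}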

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-511000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-511000 --alsologtostderr -v=1: exit status 83 (41.178667ms)

-- stdout --
	* The control-plane node embed-certs-511000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-511000"

-- /stdout --
** stderr ** 
	I0731 15:18:14.308815    6733 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:14.308982    6733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:14.308986    6733 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:14.308988    6733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:14.309128    6733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:14.309362    6733 out.go:298] Setting JSON to false
	I0731 15:18:14.309369    6733 mustload.go:65] Loading cluster: embed-certs-511000
	I0731 15:18:14.309561    6733 config.go:182] Loaded profile config "embed-certs-511000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:18:14.314756    6733 out.go:177] * The control-plane node embed-certs-511000 host is not running: state=Stopped
	I0731 15:18:14.318430    6733 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-511000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-511000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (28.394417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (28.719459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-511000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
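
Each post-mortem in this report polls `status --format={{.Host}}`; the --format value is a Go text/template rendered over minikube's status struct, which is why the raw stdout is the single word "Stopped". A minimal sketch of that rendering (the Status struct below is illustrative, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints "Stopped", matching the -- stdout -- blocks in the post-mortems.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}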

TestStartStop/group/newest-cni/serial/FirstStart (10.3s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (10.2245455s)

-- stdout --
	* [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-529000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:18:14.619237    6750 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:14.619346    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:14.619349    6750 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:14.619351    6750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:14.619477    6750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:14.620563    6750 out.go:298] Setting JSON to false
	I0731 15:18:14.636700    6750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4658,"bootTime":1722459636,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:18:14.636762    6750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:18:14.641434    6750 out.go:177] * [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:18:14.648565    6750 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:18:14.648600    6750 notify.go:220] Checking for updates...
	I0731 15:18:14.654562    6750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:18:14.657571    6750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:18:14.658971    6750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:18:14.661544    6750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:18:14.664556    6750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:18:14.667947    6750 config.go:182] Loaded profile config "default-k8s-diff-port-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:18:14.668010    6750 config.go:182] Loaded profile config "multinode-740000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:18:14.668058    6750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:18:14.672521    6750 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 15:18:14.679630    6750 start.go:297] selected driver: qemu2
	I0731 15:18:14.679637    6750 start.go:901] validating driver "qemu2" against <nil>
	I0731 15:18:14.679645    6750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:18:14.681921    6750 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 15:18:14.681945    6750 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 15:18:14.690523    6750 out.go:177] * Automatically selected the socket_vmnet network
	I0731 15:18:14.693619    6750 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 15:18:14.693637    6750 cni.go:84] Creating CNI manager for ""
	I0731 15:18:14.693643    6750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:18:14.693648    6750 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:18:14.693682    6750 start.go:340] cluster config:
	{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:14.697529    6750 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:18:14.705580    6750 out.go:177] * Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	I0731 15:18:14.709462    6750 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 15:18:14.709483    6750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 15:18:14.709493    6750 cache.go:56] Caching tarball of preloaded images
	I0731 15:18:14.709574    6750 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:18:14.709582    6750 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 15:18:14.709645    6750 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/newest-cni-529000/config.json ...
	I0731 15:18:14.709660    6750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/newest-cni-529000/config.json: {Name:mkf7f1016a8c1cb9963f87c22b2d5d5d76f644d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 15:18:14.710052    6750 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:14.710099    6750 start.go:364] duration metric: took 34.208µs to acquireMachinesLock for "newest-cni-529000"
	I0731 15:18:14.710115    6750 start.go:93] Provisioning new machine with config: &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:18:14.710145    6750 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:18:14.716505    6750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:18:14.734702    6750 start.go:159] libmachine.API.Create for "newest-cni-529000" (driver="qemu2")
	I0731 15:18:14.734736    6750 client.go:168] LocalClient.Create starting
	I0731 15:18:14.734811    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:18:14.734842    6750 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:14.734856    6750 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:14.734891    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:18:14.734915    6750 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:14.734924    6750 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:14.735366    6750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:18:14.884715    6750 main.go:141] libmachine: Creating SSH key...
	I0731 15:18:15.141515    6750 main.go:141] libmachine: Creating Disk image...
	I0731 15:18:15.141523    6750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:18:15.141768    6750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:15.151523    6750 main.go:141] libmachine: STDOUT: 
	I0731 15:18:15.151539    6750 main.go:141] libmachine: STDERR: 
	I0731 15:18:15.151583    6750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2 +20000M
	I0731 15:18:15.159492    6750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:18:15.159504    6750 main.go:141] libmachine: STDERR: 
	I0731 15:18:15.159519    6750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:15.159524    6750 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:18:15.159536    6750 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:15.159564    6750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a2:6f:16:0f:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:15.161179    6750 main.go:141] libmachine: STDOUT: 
	I0731 15:18:15.161197    6750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:15.161220    6750 client.go:171] duration metric: took 426.485583ms to LocalClient.Create
	I0731 15:18:17.163399    6750 start.go:128] duration metric: took 2.453272375s to createHost
	I0731 15:18:17.163457    6750 start.go:83] releasing machines lock for "newest-cni-529000", held for 2.45338675s
	W0731 15:18:17.163634    6750 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:17.185168    6750 out.go:177] * Deleting "newest-cni-529000" in qemu2 ...
	W0731 15:18:17.243187    6750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:17.243231    6750 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:22.245439    6750 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:22.245901    6750 start.go:364] duration metric: took 366.166µs to acquireMachinesLock for "newest-cni-529000"
	I0731 15:18:22.246013    6750 start.go:93] Provisioning new machine with config: &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:18:22.246377    6750 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 15:18:22.252026    6750 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 15:18:22.304433    6750 start.go:159] libmachine.API.Create for "newest-cni-529000" (driver="qemu2")
	I0731 15:18:22.304486    6750 client.go:168] LocalClient.Create starting
	I0731 15:18:22.304606    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/ca.pem
	I0731 15:18:22.304667    6750 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:22.304683    6750 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:22.304767    6750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1411/.minikube/certs/cert.pem
	I0731 15:18:22.304812    6750 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:22.304824    6750 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:22.305500    6750 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 15:18:22.467112    6750 main.go:141] libmachine: Creating SSH key...
	I0731 15:18:22.757531    6750 main.go:141] libmachine: Creating Disk image...
	I0731 15:18:22.757544    6750 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 15:18:22.757763    6750 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:22.767510    6750 main.go:141] libmachine: STDOUT: 
	I0731 15:18:22.767539    6750 main.go:141] libmachine: STDERR: 
	I0731 15:18:22.767593    6750 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2 +20000M
	I0731 15:18:22.775552    6750 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 15:18:22.775579    6750 main.go:141] libmachine: STDERR: 
	I0731 15:18:22.775593    6750 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:22.775600    6750 main.go:141] libmachine: Starting QEMU VM...
	I0731 15:18:22.775607    6750 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:22.775647    6750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:50:e6:d0:16:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:22.777294    6750 main.go:141] libmachine: STDOUT: 
	I0731 15:18:22.777307    6750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:22.777323    6750 client.go:171] duration metric: took 472.838667ms to LocalClient.Create
	I0731 15:18:24.779591    6750 start.go:128] duration metric: took 2.533162666s to createHost
	I0731 15:18:24.779682    6750 start.go:83] releasing machines lock for "newest-cni-529000", held for 2.533796958s
	W0731 15:18:24.780095    6750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:24.790586    6750 out.go:177] 
	W0731 15:18:24.794595    6750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:24.794625    6750 out.go:239] * 
	* 
	W0731 15:18:24.797686    6750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:18:24.803598    6750 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (69.9695ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.30s)
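Every failed qemu2 start in this report has the same proximate cause: the driver launches the VM through socket_vmnet_client, and nothing is listening on the unix socket at /var/run/socket_vmnet, so the connection is refused before qemu ever runs. As a minimal illustration (not code from minikube or the test harness; the socket path is taken from the logs above), the failing step reduces to a refused unix-socket dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With no socket_vmnet daemon listening, this dial fails with
		// "connection refused", the same error surfaced in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

Note that the suggested "minikube delete -p newest-cni-529000" removes the profile but does not restart the socket_vmnet daemon, so on this agent every subsequent qemu2 start fails the same way.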

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-416000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (31.67975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
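The "context ... does not exist" failures are downstream of the same start failure: because "minikube start" exited before provisioning, no context for the profile was ever written to the kubeconfig. A hedged sketch of the missing-context check (using k8s.io/client-go, not the harness's own code; the default kubeconfig path is an assumption, since this CI run sets KUBECONFIG explicitly):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		kubeconfig := filepath.Join(home, ".kube", "config") // assumed default path
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		name := "default-k8s-diff-port-416000"
		if _, ok := cfg.Contexts[name]; !ok {
			// The same condition kubectl reports above with exit status 1.
			fmt.Printf("context %q does not exist\n", name)
		}
	}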

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-416000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-416000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-416000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.681041ms)

** stderr ** 
	error: context "default-k8s-diff-port-416000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-416000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (27.775125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-416000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (28.67775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
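The "(-want +got)" diff above matches the output convention of github.com/google/go-cmp: "-" lines are expected entries missing from the actual list, which is empty here because the VM never booted and "image list" had nothing to report. A hedged sketch (illustrative values, not the test's real table) of how such a diff is produced:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.9"} // one expected image, for brevity
		got := []string{}                             // no VM, so no images listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}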

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-416000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-416000 --alsologtostderr -v=1: exit status 83 (41.814375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-416000"

-- /stdout --
** stderr ** 
	I0731 15:18:17.492802    6775 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:17.492953    6775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:17.492956    6775 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:17.492959    6775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:17.493091    6775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:17.493308    6775 out.go:298] Setting JSON to false
	I0731 15:18:17.493318    6775 mustload.go:65] Loading cluster: default-k8s-diff-port-416000
	I0731 15:18:17.493533    6775 config.go:182] Loaded profile config "default-k8s-diff-port-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:18:17.498113    6775 out.go:177] * The control-plane node default-k8s-diff-port-416000 host is not running: state=Stopped
	I0731 15:18:17.504134    6775 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-416000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-416000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (27.759583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (27.971916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.171446542s)

-- stdout --
	* [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	* Restarting existing qemu2 VM for "newest-cni-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-529000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 15:18:28.364395    6822 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:28.364523    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:28.364526    6822 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:28.364529    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:28.364701    6822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:28.365752    6822 out.go:298] Setting JSON to false
	I0731 15:18:28.381796    6822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4672,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 15:18:28.381878    6822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:18:28.384188    6822 out.go:177] * [newest-cni-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 15:18:28.391613    6822 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 15:18:28.391693    6822 notify.go:220] Checking for updates...
	I0731 15:18:28.397578    6822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 15:18:28.400592    6822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 15:18:28.401889    6822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:18:28.404552    6822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 15:18:28.407591    6822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:18:28.410930    6822 config.go:182] Loaded profile config "newest-cni-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 15:18:28.411181    6822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:18:28.415589    6822 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 15:18:28.422531    6822 start.go:297] selected driver: qemu2
	I0731 15:18:28.422538    6822 start.go:901] validating driver "qemu2" against &{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:28.422598    6822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:18:28.424988    6822 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 15:18:28.425011    6822 cni.go:84] Creating CNI manager for ""
	I0731 15:18:28.425017    6822 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:18:28.425053    6822 start.go:340] cluster config:
	{Name:newest-cni-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:18:28.428562    6822 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:18:28.435525    6822 out.go:177] * Starting "newest-cni-529000" primary control-plane node in "newest-cni-529000" cluster
	I0731 15:18:28.439546    6822 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 15:18:28.439560    6822 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 15:18:28.439571    6822 cache.go:56] Caching tarball of preloaded images
	I0731 15:18:28.439634    6822 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 15:18:28.439641    6822 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 15:18:28.439685    6822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/newest-cni-529000/config.json ...
	I0731 15:18:28.440117    6822 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:28.440154    6822 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "newest-cni-529000"
	I0731 15:18:28.440165    6822 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:28.440171    6822 fix.go:54] fixHost starting: 
	I0731 15:18:28.440284    6822 fix.go:112] recreateIfNeeded on newest-cni-529000: state=Stopped err=<nil>
	W0731 15:18:28.440292    6822 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:18:28.444592    6822 out.go:177] * Restarting existing qemu2 VM for "newest-cni-529000" ...
	I0731 15:18:28.452634    6822 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:28.452688    6822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:50:e6:d0:16:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:28.454654    6822 main.go:141] libmachine: STDOUT: 
	I0731 15:18:28.454673    6822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:28.454701    6822 fix.go:56] duration metric: took 14.531417ms for fixHost
	I0731 15:18:28.454705    6822 start.go:83] releasing machines lock for "newest-cni-529000", held for 14.547791ms
	W0731 15:18:28.454768    6822 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:28.454804    6822 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:28.454809    6822 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:33.456955    6822 start.go:360] acquireMachinesLock for newest-cni-529000: {Name:mk17e480d3379583d41ee1c3967103aa4bcd5746 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 15:18:33.457366    6822 start.go:364] duration metric: took 318.5µs to acquireMachinesLock for "newest-cni-529000"
	I0731 15:18:33.457502    6822 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:33.457522    6822 fix.go:54] fixHost starting: 
	I0731 15:18:33.458172    6822 fix.go:112] recreateIfNeeded on newest-cni-529000: state=Stopped err=<nil>
	W0731 15:18:33.458197    6822 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 15:18:33.463782    6822 out.go:177] * Restarting existing qemu2 VM for "newest-cni-529000" ...
	I0731 15:18:33.467727    6822 qemu.go:418] Using hvf for hardware acceleration
	I0731 15:18:33.467924    6822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:50:e6:d0:16:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19312-1411/.minikube/machines/newest-cni-529000/disk.qcow2
	I0731 15:18:33.476953    6822 main.go:141] libmachine: STDOUT: 
	I0731 15:18:33.477040    6822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 15:18:33.477141    6822 fix.go:56] duration metric: took 19.617625ms for fixHost
	I0731 15:18:33.477164    6822 start.go:83] releasing machines lock for "newest-cni-529000", held for 19.774083ms
	W0731 15:18:33.477455    6822 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-529000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 15:18:33.483768    6822 out.go:177] 
	W0731 15:18:33.487746    6822 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 15:18:33.487773    6822 out.go:239] * 
	* 
	W0731 15:18:33.490652    6822 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:18:33.496764    6822 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-529000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (66.060292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
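The command line captured in the stderr above shows the launch pattern: socket_vmnet_client first connects to /var/run/socket_vmnet and then runs qemu-system-aarch64 with that connection passed down as a file descriptor (hence "-netdev socket,id=net0,fd=3"), so a refused connection aborts the launch before qemu starts, yielding "exit status 1". A hedged sketch of that wrapper invocation (flags elided; paths from the log; not the driver's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64", // remaining qemu flags elided for brevity
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// With no listener on the socket this reports "exit status 1",
			// matching the driver logs above.
			fmt.Println("launch failed:", err)
		}
	}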

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-529000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (29.458333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1: exit status 83 (40.8285ms)

-- stdout --
	* The control-plane node newest-cni-529000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-529000"

-- /stdout --
** stderr ** 
	I0731 15:18:33.677763    6836 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:18:33.677918    6836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:33.677930    6836 out.go:304] Setting ErrFile to fd 2...
	I0731 15:18:33.677933    6836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:18:33.678048    6836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 15:18:33.678277    6836 out.go:298] Setting JSON to false
	I0731 15:18:33.678283    6836 mustload.go:65] Loading cluster: newest-cni-529000
	I0731 15:18:33.678476    6836 config.go:182] Loaded profile config "newest-cni-529000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 15:18:33.682749    6836 out.go:177] * The control-plane node newest-cni-529000 host is not running: state=Stopped
	I0731 15:18:33.686762    6836 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-529000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-529000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (29.238416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (29.776917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-529000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 11.8
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.24
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 209.33
38 TestAddons/serial/Volcano 39.91
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.18
43 TestAddons/parallel/Ingress 20.01
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.26
48 TestAddons/parallel/CSI 46.72
49 TestAddons/parallel/Headlamp 17.53
50 TestAddons/parallel/CloudSpanner 5.16
51 TestAddons/parallel/LocalPath 40.79
52 TestAddons/parallel/NvidiaDevicePlugin 5.14
53 TestAddons/parallel/Yakd 11.2
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 11.19
65 TestErrorSpam/setup 35.48
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.71
69 TestErrorSpam/unpause 0.6
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 49.74
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.56
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.54
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 62.89
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.66
93 TestFunctional/serial/LogsFileCmd 0.6
94 TestFunctional/serial/InvalidService 4.17
96 TestFunctional/parallel/ConfigCmd 0.21
97 TestFunctional/parallel/DashboardCmd 8.67
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.88
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.42
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.41
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
120 TestFunctional/parallel/License 0.23
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.09
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.03
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.08
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
136 TestFunctional/parallel/ServiceCmd/Format 0.1
137 TestFunctional/parallel/ServiceCmd/URL 0.1
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
139 TestFunctional/parallel/ProfileCmd/profile_list 0.12
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
141 TestFunctional/parallel/MountCmd/any-port 5.02
142 TestFunctional/parallel/MountCmd/specific-port 0.92
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
144 TestFunctional/parallel/Version/short 0.03
145 TestFunctional/parallel/Version/components 0.17
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
150 TestFunctional/parallel/ImageCommands/ImageBuild 1.69
151 TestFunctional/parallel/ImageCommands/Setup 1.66
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.32
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
159 TestFunctional/parallel/DockerEnv/bash 0.27
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 187.92
170 TestMultiControlPlane/serial/DeployApp 4.4
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 53.21
173 TestMultiControlPlane/serial/NodeLabels 0.13
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
175 TestMultiControlPlane/serial/CopyFile 4.14
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 77.96
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 2.02
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.96
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.44
286 TestNoKubernetes/serial/Stop 3.42
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
300 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
303 TestStartStop/group/old-k8s-version/serial/Stop 2.86
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/no-preload/serial/Stop 3.87
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
327 TestStartStop/group/embed-certs/serial/Stop 3.25
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.45
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 3.27
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-010000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-010000: exit status 85 (94.131834ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-010000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT |          |
	|         | -p download-only-010000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:26:08
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:26:08.650089    1915 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:26:08.650269    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:08.650275    1915 out.go:304] Setting ErrFile to fd 2...
	I0731 14:26:08.650278    1915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:08.650403    1915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	W0731 14:26:08.650541    1915 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1411/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1411/.minikube/config/config.json: no such file or directory
	I0731 14:26:08.651861    1915 out.go:298] Setting JSON to true
	I0731 14:26:08.669389    1915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1532,"bootTime":1722459636,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:26:08.669472    1915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:26:08.673851    1915 out.go:97] [download-only-010000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:26:08.673984    1915 notify.go:220] Checking for updates...
	W0731 14:26:08.674010    1915 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 14:26:08.677751    1915 out.go:169] MINIKUBE_LOCATION=19312
	I0731 14:26:08.680778    1915 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:26:08.687891    1915 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:26:08.691823    1915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:26:08.694777    1915 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	W0731 14:26:08.702722    1915 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:26:08.702909    1915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:26:08.706854    1915 out.go:97] Using the qemu2 driver based on user configuration
	I0731 14:26:08.706877    1915 start.go:297] selected driver: qemu2
	I0731 14:26:08.706883    1915 start.go:901] validating driver "qemu2" against <nil>
	I0731 14:26:08.706969    1915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:26:08.709789    1915 out.go:169] Automatically selected the socket_vmnet network
	I0731 14:26:08.715580    1915 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 14:26:08.715710    1915 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:26:08.715758    1915 cni.go:84] Creating CNI manager for ""
	I0731 14:26:08.715776    1915 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 14:26:08.715821    1915 start.go:340] cluster config:
	{Name:download-only-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-010000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:26:08.721527    1915 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:26:08.724830    1915 out.go:97] Downloading VM boot image ...
	I0731 14:26:08.724855    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 14:26:15.086061    1915 out.go:97] Starting "download-only-010000" primary control-plane node in "download-only-010000" cluster
	I0731 14:26:15.086087    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:26:15.150445    1915 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 14:26:15.150458    1915 cache.go:56] Caching tarball of preloaded images
	I0731 14:26:15.150644    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:26:15.157783    1915 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 14:26:15.157791    1915 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:15.236048    1915 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 14:26:22.197669    1915 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:22.197856    1915 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:22.893289    1915 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 14:26:22.893476    1915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-010000/config.json ...
	I0731 14:26:22.893492    1915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-010000/config.json: {Name:mk96c76876e8a3ab2d7cc57c5d91f2c6bf7fab17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:26:22.893707    1915 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:26:22.893890    1915 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 14:26:23.263094    1915 out.go:169] 
	W0731 14:26:23.268107    1915 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0 0x106bf9aa0] Decompressors:map[bz2:0x14000813800 gz:0x14000813808 tar:0x140008137b0 tar.bz2:0x140008137c0 tar.gz:0x140008137d0 tar.xz:0x140008137e0 tar.zst:0x140008137f0 tbz2:0x140008137c0 tgz:0x140008137d0 txz:0x140008137e0 tzst:0x140008137f0 xz:0x14000813810 zip:0x14000813820 zst:0x14000813818] Getters:map[file:0x14000816740 http:0x140009c4500 https:0x140009c45a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 14:26:23.268148    1915 out_reason.go:110] 
	W0731 14:26:23.274014    1915 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 14:26:23.277968    1915 out.go:169] 
	
	
	* The control-plane node download-only-010000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-010000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
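
The 404 in the stdout dump above is go-getter's checksum handling at work: the ?checksum=file:<url> query on the kubectl download tells it to fetch the named .sha256 companion file and verify the binary against it, and it is that companion fetch which returns 404. Most likely upstream simply publishes no darwin/arm64 kubectl binary for v1.20.0, which would also account for the TestDownloadOnly/v1.20.0/kubectl failure in the summary table. A minimal sketch of the same call, assuming the hashicorp/go-getter v1 API (getter.GetFile is the convenience wrapper around the getter.Client struct printed in the error; the v1.30.3 URL is used purely as a working illustration):

    // download_kubectl.go - sketch of the download the log shows, via go-getter.
    // The checksum query makes go-getter fetch the .sha256 file first; if that
    // URL 404s, the whole download fails exactly as logged above.
    package main

    import (
        "log"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        src := "https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl" +
            "?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256"
        if err := getter.GetFile("/tmp/kubectl", src); err != nil {
            // For v1.20.0 on darwin/arm64 this is where the
            // "bad response code: 404" surfaces.
            log.Fatal(err)
        }
    }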

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-010000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (11.8s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-654000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-654000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (11.800010834s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.80s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-654000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-654000: exit status 85 (78.522875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-010000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT |                     |
	|         | -p download-only-010000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT | 31 Jul 24 14:26 PDT |
	| delete  | -p download-only-010000        | download-only-010000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT | 31 Jul 24 14:26 PDT |
	| start   | -o=json --download-only        | download-only-654000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT |                     |
	|         | -p download-only-654000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:26:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:26:23.675772    1948 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:26:23.675911    1948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:23.675915    1948 out.go:304] Setting ErrFile to fd 2...
	I0731 14:26:23.675917    1948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:23.676040    1948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:26:23.677122    1948 out.go:298] Setting JSON to true
	I0731 14:26:23.693095    1948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1547,"bootTime":1722459636,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:26:23.693161    1948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:26:23.697786    1948 out.go:97] [download-only-654000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:26:23.697868    1948 notify.go:220] Checking for updates...
	I0731 14:26:23.701880    1948 out.go:169] MINIKUBE_LOCATION=19312
	I0731 14:26:23.704864    1948 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:26:23.708897    1948 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:26:23.711893    1948 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:26:23.714896    1948 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	W0731 14:26:23.721834    1948 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:26:23.721978    1948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:26:23.724856    1948 out.go:97] Using the qemu2 driver based on user configuration
	I0731 14:26:23.724865    1948 start.go:297] selected driver: qemu2
	I0731 14:26:23.724868    1948 start.go:901] validating driver "qemu2" against <nil>
	I0731 14:26:23.724925    1948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:26:23.727824    1948 out.go:169] Automatically selected the socket_vmnet network
	I0731 14:26:23.733039    1948 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 14:26:23.733120    1948 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:26:23.733162    1948 cni.go:84] Creating CNI manager for ""
	I0731 14:26:23.733173    1948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 14:26:23.733185    1948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 14:26:23.733255    1948 start.go:340] cluster config:
	{Name:download-only-654000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:26:23.736714    1948 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:26:23.739863    1948 out.go:97] Starting "download-only-654000" primary control-plane node in "download-only-654000" cluster
	I0731 14:26:23.739869    1948 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:26:23.796093    1948 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 14:26:23.796132    1948 cache.go:56] Caching tarball of preloaded images
	I0731 14:26:23.796291    1948 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:26:23.801373    1948 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 14:26:23.801380    1948 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:23.888310    1948 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 14:26:30.813439    1948 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:30.813607    1948 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:31.356650    1948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 14:26:31.356857    1948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-654000/config.json ...
	I0731 14:26:31.356878    1948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-654000/config.json: {Name:mk2d9364297ff6bd8b8f29afcc265140ca75fb3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:26:31.357107    1948 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:26:31.357229    1948 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-654000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-654000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
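
The preload steps above ("getting checksum", "verifying checksum") pin the tarball to the md5 digest carried in the download URL's ?checksum=md5:... query. A rough standalone equivalent of that verification, assuming nothing about minikube's actual preload.go beyond what the log shows (path and digest are the ones logged for v1.30.3):

    // verify_preload.go - rough equivalent of the "verifying checksum of ..." step.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the downloaded tarball and compares it with the digest
    // carried in the preload URL's checksum query.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Digest from the v1.30.3 preload URL logged above.
        fmt.Println(verifyMD5(
            "preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4",
            "5a76dba1959f6b6fc5e29e1e172ab9ca"))
    }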

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-654000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (12.24s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-583000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-583000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (12.235464458s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.24s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-583000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-583000: exit status 85 (76.259709ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-010000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT |                     |
	|         | -p download-only-010000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT | 31 Jul 24 14:26 PDT |
	| delete  | -p download-only-010000             | download-only-010000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT | 31 Jul 24 14:26 PDT |
	| start   | -o=json --download-only             | download-only-654000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT |                     |
	|         | -p download-only-654000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT | 31 Jul 24 14:26 PDT |
	| delete  | -p download-only-654000             | download-only-654000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT | 31 Jul 24 14:26 PDT |
	| start   | -o=json --download-only             | download-only-583000 | jenkins | v1.33.1 | 31 Jul 24 14:26 PDT |                     |
	|         | -p download-only-583000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:26:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:26:35.765872    1972 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:26:35.765997    1972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:35.766001    1972 out.go:304] Setting ErrFile to fd 2...
	I0731 14:26:35.766003    1972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:26:35.766139    1972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:26:35.767186    1972 out.go:298] Setting JSON to true
	I0731 14:26:35.783206    1972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1559,"bootTime":1722459636,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:26:35.783265    1972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:26:35.787817    1972 out.go:97] [download-only-583000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:26:35.787934    1972 notify.go:220] Checking for updates...
	I0731 14:26:35.791732    1972 out.go:169] MINIKUBE_LOCATION=19312
	I0731 14:26:35.795831    1972 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:26:35.799712    1972 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:26:35.802809    1972 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:26:35.805817    1972 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	W0731 14:26:35.811776    1972 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:26:35.811916    1972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:26:35.814817    1972 out.go:97] Using the qemu2 driver based on user configuration
	I0731 14:26:35.814825    1972 start.go:297] selected driver: qemu2
	I0731 14:26:35.814829    1972 start.go:901] validating driver "qemu2" against <nil>
	I0731 14:26:35.814868    1972 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:26:35.816287    1972 out.go:169] Automatically selected the socket_vmnet network
	I0731 14:26:35.820947    1972 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 14:26:35.821034    1972 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:26:35.821052    1972 cni.go:84] Creating CNI manager for ""
	I0731 14:26:35.821061    1972 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 14:26:35.821067    1972 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 14:26:35.821109    1972 start.go:340] cluster config:
	{Name:download-only-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:26:35.824570    1972 iso.go:125] acquiring lock: {Name:mkc6b2cf7fc042c03894f5c3a7761b899ed1e8e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:26:35.827791    1972 out.go:97] Starting "download-only-583000" primary control-plane node in "download-only-583000" cluster
	I0731 14:26:35.827797    1972 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 14:26:35.882322    1972 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 14:26:35.882337    1972 cache.go:56] Caching tarball of preloaded images
	I0731 14:26:35.882499    1972 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 14:26:35.887638    1972 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 14:26:35.887645    1972 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:35.962868    1972 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 14:26:43.222292    1972 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:43.222459    1972 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 14:26:43.742062    1972 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 14:26:43.742261    1972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-583000/config.json ...
	I0731 14:26:43.742276    1972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/download-only-583000/config.json: {Name:mkb594fa0a749172369a874277b9a1a58e502fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:26:43.742621    1972 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 14:26:43.742758    1972 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1411/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-583000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-583000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-583000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-615000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-615000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-615000
--- PASS: TestBinaryMirror (0.35s)
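
--binary-mirror redirects minikube's Kubernetes binary downloads to the given base URL instead of dl.k8s.io; the test presumably stands up its own local HTTP server on 127.0.0.1:49325 for this. A minimal sketch of such a mirror, assuming only that it must answer dl.k8s.io-style paths such as release/v1.30.3/bin/darwin/arm64/kubectl (the ./mirror directory name is illustrative):

    // mirror.go - sketch of a local binary mirror for --binary-mirror.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // ./mirror is assumed to be laid out like dl.k8s.io, e.g.
        // ./mirror/release/v1.30.3/bin/darwin/arm64/kubectl
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:49325", nil))
    }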

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-941000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-941000: exit status 85 (50.746917ms)

-- stdout --
	* Profile "addons-941000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-941000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-941000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-941000: exit status 85 (54.189333ms)

-- stdout --
	* Profile "addons-941000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-941000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (209.33s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-941000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-941000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m29.33280325s)
--- PASS: TestAddons/Setup (209.33s)

TestAddons/serial/Volcano (39.91s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 6.891125ms
addons_test.go:905: volcano-admission stabilized in 6.925917ms
addons_test.go:913: volcano-controller stabilized in 7.124459ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-b8pbp" [d8a73718-c937-493c-9bb9-29a76e3fc4ed] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004025959s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-h7rdx" [6c239fe1-618e-4975-9679-5f27cc87522e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003837792s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-hclcz" [21ce8679-c425-43a6-ad13-91431e406ce6] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003737542s
addons_test.go:932: (dbg) Run:  kubectl --context addons-941000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-941000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-941000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a7defbb5-7343-46fa-b201-b82075c4c571] Pending
helpers_test.go:344: "test-job-nginx-0" [a7defbb5-7343-46fa-b201-b82075c4c571] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a7defbb5-7343-46fa-b201-b82075c4c571] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003680625s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable volcano --alsologtostderr -v=1: (9.695830792s)
--- PASS: TestAddons/serial/Volcano (39.91s)
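
The helpers_test.go:344 lines above are the harness polling pods by label selector until one reports Running and its readiness conditions clear. A minimal client-go sketch of that wait pattern, assuming a standard kubeconfig; this is not minikube's actual helper, and the volcano-system namespace/selector are taken from the log purely as an example:

    // wait_pods.go - sketch of the "waiting for pods matching <selector>" pattern.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForRunning polls pods matching selector until one reports Running.
    func waitForRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same wait the Volcano test performs on its scheduler pods.
        fmt.Println(waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute))
    }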

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-941000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-941000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.316083ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-5zjrw" [568e379e-d3b6-483c-bed5-1be57e4d1e36] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003608125s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lc4lt" [8b15a305-481b-4958-80a1-248120dcd59f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004092042s
addons_test.go:342: (dbg) Run:  kubectl --context addons-941000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-941000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-941000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.905386208s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 ip
2024/07/31 14:31:27 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.18s)
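
The registry-test pod's wget --spider probe checks that the registry Service answers over its cluster DNS name without pulling any data. An equivalent HEAD request in Go (a sketch; like the original it only works from inside the cluster, which is why the test runs it in a busybox pod):

    // registry_probe.go - HEAD probe of the registry Service, like wget --spider.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            panic(err) // this DNS name only resolves in-cluster
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }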

TestAddons/parallel/Ingress (20.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-941000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-941000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-941000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9909049e-2f3d-4f82-aa4a-87e6b16727e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9909049e-2f3d-4f82-aa4a-87e6b16727e8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003980333s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-941000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable ingress-dns --alsologtostderr -v=1: (1.247035125s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable ingress --alsologtostderr -v=1: (7.197939542s)
--- PASS: TestAddons/parallel/Ingress (20.01s)
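
The curl step above exercises name-based routing: it connects to 127.0.0.1 but presents the Host header the Ingress rule matches on. The same check in Go (a sketch; URL and host are the ones from the log):

    // ingress_check.go - curl -H 'Host: nginx.example.com' http://127.0.0.1/ in Go.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        // Setting req.Host (not a header field) controls the Host: line Go
        // sends, so the ingress controller routes this to the nginx Service.
        req.Host = "nginx.example.com"
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }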

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nlnlb" [88421ea4-26d0-4581-b3ed-1e96bd6669f0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004236042s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-941000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-941000: (5.215404417s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.540625ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-9w84t" [f2ed7270-5fe8-4e21-9726-52dece220f8b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003825916s
addons_test.go:417: (dbg) Run:  kubectl --context addons-941000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (46.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.355709ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-941000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-941000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a08fdaaf-90c8-43d2-8155-579bf2caf489] Pending
helpers_test.go:344: "task-pv-pod" [a08fdaaf-90c8-43d2-8155-579bf2caf489] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a08fdaaf-90c8-43d2-8155-579bf2caf489] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004983083s
addons_test.go:590: (dbg) Run:  kubectl --context addons-941000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-941000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-941000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-941000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-941000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-941000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-941000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bd5c1e04-4747-4439-ae6e-66cbf1287cae] Pending
helpers_test.go:344: "task-pv-pod-restore" [bd5c1e04-4747-4439-ae6e-66cbf1287cae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bd5c1e04-4747-4439-ae6e-66cbf1287cae] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0037985s
addons_test.go:632: (dbg) Run:  kubectl --context addons-941000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-941000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-941000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.074823708s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.72s)
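
The long run of jsonpath polls above is the harness waiting for each claim's .status.phase to reach Bound before moving on. The same check via client-go (a sketch, not the test's code; the hpvc claim name is from the log):

    // wait_pvc.go - sketch of the "get pvc -o jsonpath={.status.phase}" polling.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForBound polls a PersistentVolumeClaim until its phase is Bound.
    func waitForBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(
                context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if pvc.Status.Phase == corev1.ClaimBound {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForBound(cs, "default", "hpvc", 6*time.Minute))
    }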

TestAddons/parallel/Headlamp (17.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-941000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tdzpd" [60def0aa-2588-424a-b41c-b312d9da1cce] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tdzpd" [60def0aa-2588-424a-b41c-b312d9da1cce] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003759625s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable headlamp --alsologtostderr -v=1: (5.198214792s)
--- PASS: TestAddons/parallel/Headlamp (17.53s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-cqgdm" [fdb0c8d9-b0ad-49ec-b8bf-d5bdf4b1616c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00406575s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-941000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (40.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-941000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-941000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dac5b908-aa1c-400f-8274-1ec6cf3f4a66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dac5b908-aa1c-400f-8274-1ec6cf3f4a66] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dac5b908-aa1c-400f-8274-1ec6cf3f4a66] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003501584s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-941000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 ssh "cat /opt/local-path-provisioner/pvc-9768a01a-3c53-4f32-aa62-056df3941bc6_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-941000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-941000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.343099458s)
--- PASS: TestAddons/parallel/LocalPath (40.79s)
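
For anyone replaying the local-path flow by hand, the steps above reduce to the sketch below (profile name and manifests are the ones from this run; the provisioner path embeds the PVC's UID, so that segment differs on every run):

	kubectl --context addons-941000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-941000 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# poll until the claim reports Bound and the pod completes
	kubectl --context addons-941000 get pvc test-pvc -o jsonpath={.status.phase}
	# read back the file the pod wrote into the provisioner-managed host path
	out/minikube-darwin-arm64 -p addons-941000 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"
	kubectl --context addons-941000 delete pod test-local-path
	kubectl --context addons-941000 delete pvc test-pvc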

TestAddons/parallel/NvidiaDevicePlugin (5.14s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zmk82" [b5e60c00-0133-4220-9960-e5f98702e239] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004173917s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-941000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.14s)

TestAddons/parallel/Yakd (11.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-6dgmn" [c244a6eb-c1a2-46bc-b9d8-1723bc5c810a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002998s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-941000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-941000 addons disable yakd --alsologtostderr -v=1: (5.192867166s)
--- PASS: TestAddons/parallel/Yakd (11.20s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-941000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-941000: (12.201762958s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-941000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-941000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-941000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (11.19s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.19s)

TestErrorSpam/setup (35.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-643000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-643000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 --driver=qemu2 : (35.47937025s)
--- PASS: TestErrorSpam/setup (35.48s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 pause
--- PASS: TestErrorSpam/pause (0.71s)

TestErrorSpam/unpause (0.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 stop: (12.196245917s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 stop: (26.059609083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-643000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-643000 stop: (26.030560625s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19312-1411/.minikube/files/etc/test/nested/copy/1913/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.74s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-430000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0731 14:35:18.286819    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.297328    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.309369    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.331420    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.373478    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.455520    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.617573    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:18.939665    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:19.581840    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:20.863982    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:23.426040    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:28.548083    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-430000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.736540542s)
--- PASS: TestFunctional/serial/StartWithProxy (49.74s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.56s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-430000 --alsologtostderr -v=8
E0731 14:35:38.790002    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:35:59.271653    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-430000 --alsologtostderr -v=8: (37.557630417s)
functional_test.go:663: soft start took 37.558000042s for "functional-430000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.56s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-430000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.54s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2013937522/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cache add minikube-local-cache-test:functional-430000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cache delete minikube-local-cache-test:functional-430000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-430000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.151291ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)
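
The reload sequence above is the interesting part: the test deletes a cached image inside the node, confirms crictl can no longer find it, then restores it from minikube's on-host cache. A minimal by-hand sketch using the same profile and image as this run:

	out/minikube-darwin-arm64 -p functional-430000 ssh sudo docker rmi registry.k8s.io/pause:latest
	# expected to fail (exit status 1): the image is gone from the node
	out/minikube-darwin-arm64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-430000 cache reload
	# succeeds once the cached image has been pushed back into the node
	out/minikube-darwin-arm64 -p functional-430000 ssh sudo crictl inspecti registry.k8s.io/pause:latest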

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 kubectl -- --context functional-430000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-430000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

TestFunctional/serial/ExtraConfig (62.89s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-430000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 14:36:40.232777    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-430000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.892453333s)
functional_test.go:761: restart took 1m2.892558417s for "functional-430000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.89s)
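
The restart above exercises --extra-config, which accepts component.flag=value pairs and re-applies them when an existing cluster starts. The invocation from this run, reusable as-is against an existing profile:

	out/minikube-darwin-arm64 start -p functional-430000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all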

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-430000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1307034636/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-430000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-430000: exit status 115 (102.813292ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30476 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-430000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)
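
The exit status is the point of this test: a service whose selector matches no running pod should make "minikube service" fail with SVC_UNREACHABLE (exit status 115) rather than hand back a dead URL. A by-hand sketch using the same manifest (the echo $? check is added here for illustration):

	kubectl --context functional-430000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-arm64 service invalid-svc -p functional-430000
	echo $?   # expected: 115
	kubectl --context functional-430000 delete -f testdata/invalidsvc.yaml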

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 config get cpus: exit status 14 (28.570625ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 config get cpus: exit status 14 (29.495625ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
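
The round-trip above pins down the expected behavior of the config subcommand: "config get" on an unset key exits with status 14 ("specified key could not be found in config"), while set/get/unset succeed. Condensed:

	out/minikube-darwin-arm64 -p functional-430000 config get cpus    # exit 14 while unset
	out/minikube-darwin-arm64 -p functional-430000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-430000 config get cpus    # prints 2
	out/minikube-darwin-arm64 -p functional-430000 config unset cpus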

TestFunctional/parallel/DashboardCmd (8.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-430000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-430000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2921: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.67s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.63075ms)
-- stdout --
	* [functional-430000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0731 14:38:07.279802    2908 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:38:07.279948    2908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:38:07.279952    2908 out.go:304] Setting ErrFile to fd 2...
	I0731 14:38:07.279954    2908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:38:07.280070    2908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:38:07.281085    2908 out.go:298] Setting JSON to false
	I0731 14:38:07.297833    2908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2251,"bootTime":1722459636,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:38:07.297919    2908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:38:07.302732    2908 out.go:177] * [functional-430000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 14:38:07.309769    2908 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 14:38:07.309785    2908 notify.go:220] Checking for updates...
	I0731 14:38:07.316687    2908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:38:07.319685    2908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:38:07.322752    2908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:38:07.325681    2908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 14:38:07.328690    2908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:38:07.331923    2908 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:38:07.332156    2908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:38:07.335612    2908 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 14:38:07.342687    2908 start.go:297] selected driver: qemu2
	I0731 14:38:07.342693    2908 start.go:901] validating driver "qemu2" against &{Name:functional-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:38:07.342744    2908 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:38:07.349699    2908 out.go:177] 
	W0731 14:38:07.353670    2908 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 14:38:07.357551    2908 out.go:177] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-430000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
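
The two dry runs bracket minikube's memory validation: a request below the usable minimum of 1800MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any VM work happens, while the same command without --memory validates cleanly. The failing case from this run (the echo $? check is illustrative):

	out/minikube-darwin-arm64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2
	echo $?   # expected: 23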

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.866917ms)
-- stdout --
	* [functional-430000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0731 14:38:07.167070    2904 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:38:07.167186    2904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:38:07.167191    2904 out.go:304] Setting ErrFile to fd 2...
	I0731 14:38:07.167194    2904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:38:07.167327    2904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
	I0731 14:38:07.168797    2904 out.go:298] Setting JSON to false
	I0731 14:38:07.186252    2904 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2251,"bootTime":1722459636,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 14:38:07.186372    2904 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:38:07.189698    2904 out.go:177] * [functional-430000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0731 14:38:07.197682    2904 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 14:38:07.197777    2904 notify.go:220] Checking for updates...
	I0731 14:38:07.205676    2904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	I0731 14:38:07.208619    2904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 14:38:07.211629    2904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:38:07.214659    2904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	I0731 14:38:07.215907    2904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:38:07.218900    2904 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:38:07.219147    2904 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:38:07.223639    2904 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0731 14:38:07.228701    2904 start.go:297] selected driver: qemu2
	I0731 14:38:07.228708    2904 start.go:901] validating driver "qemu2" against &{Name:functional-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:38:07.228762    2904 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:38:07.234651    2904 out.go:177] 
	W0731 14:38:07.238630    2904 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 14:38:07.242697    2904 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
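
This is the same undersized --dry-run as above, run with a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY message comes out localized ("Fermeture en raison de ..."). Roughly as follows; the exact locale variable the harness sets is an assumption here:

	# assumption: locale selected via LC_ALL; the test harness may set it differently
	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-430000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2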

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
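
Beyond the default output, status accepts a Go template via -f and structured output via -o. The format string below is the one this run used, quoted for shell use (note "kublet" is spelled that way in the test itself):

	out/minikube-darwin-arm64 -p functional-430000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-darwin-arm64 -p functional-430000 status -o json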

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.88s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [38007c70-a2b9-4003-b5fe-e9f7afca78c5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004467625s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-430000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-430000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e7f27a8a-ffdd-407e-bed8-2e13ca08d440] Pending
helpers_test.go:344: "sp-pod" [e7f27a8a-ffdd-407e-bed8-2e13ca08d440] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e7f27a8a-ffdd-407e-bed8-2e13ca08d440] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003789666s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-430000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-430000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [46422ee5-c490-4edb-a135-d02d0b55977d] Pending
helpers_test.go:344: "sp-pod" [46422ee5-c490-4edb-a135-d02d0b55977d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [46422ee5-c490-4edb-a135-d02d0b55977d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003741334s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-430000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.88s)
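
The pod delete/re-create in the middle is what makes this a persistence test: data written to the PVC-backed mount must survive the pod being replaced. Condensed from the steps above:

	kubectl --context functional-430000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-430000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-430000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-430000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-430000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-430000 exec sp-pod -- ls /tmp/mount   # foo survives the pod swap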

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh -n functional-430000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cp functional-430000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd89884408/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh -n functional-430000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh -n functional-430000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)
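
The three cp cases above cover copying into the node, copying back out to the host, and copying to a destination directory that does not yet exist inside the node. Condensed (the local destination path is illustrative):

	out/minikube-darwin-arm64 -p functional-430000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-arm64 -p functional-430000 cp functional-430000:/home/docker/cp-test.txt ./cp-test.txt
	out/minikube-darwin-arm64 -p functional-430000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt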

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1913/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /etc/test/nested/copy/1913/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1913.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/1913.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1913.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /usr/share/ca-certificates/1913.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/19132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/19132.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/19132.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /usr/share/ca-certificates/19132.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-430000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh "sudo systemctl is-active crio": exit status 1 (103.523166ms)
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)
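The non-zero exit here is the expected result: systemctl is-active exits with status 3 for an inactive unit (the "Process exited with status 3" in stderr), and minikube ssh reports the remote failure as its own exit status 1. A sketch of the same check in script form:

# crio should be inactive when docker is the active runtime
if out/minikube-darwin-arm64 -p functional-430000 ssh "sudo systemctl is-active --quiet crio"; then
  echo "unexpected: crio is active"
else
  echo "crio is inactive, as the stdout above shows"
fi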

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-430000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-430000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-430000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-430000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2752: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-430000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-430000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dc2c0ed5-5c2a-4dcf-8973-089f67981deb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dc2c0ed5-5c2a-4dcf-8973-089f67981deb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.001811542s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.09s)
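testdata/testsvc.yaml pairs the nginx-svc pod with a Service of type LoadBalancer, which is what minikube tunnel watches for. A minimal equivalent, sketched as a heredoc (the real testdata file may differ in detail):

kubectl --context functional-430000 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF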

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-430000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.98.94 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
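The "is working" message means an HTTP request against the LoadBalancer ingress IP succeeded while the tunnel was up. A hand-run equivalent (sketch):

IP=$(kubectl --context functional-430000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -fsS "http://${IP}" >/dev/null && echo "tunnel at http://${IP} is working"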

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-430000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-430000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-430000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-vn8mp" [db26b016-be70-4b15-a759-a1c990da3063] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-vn8mp" [db26b016-be70-4b15-a759-a1c990da3063] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00422925s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.08s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 service list -o json
functional_test.go:1494: Took "280.923125ms" to run "out/minikube-darwin-arm64 -p functional-430000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30873
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30873
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
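The endpoint printed by minikube service --url is, in effect, the node IP joined with the service's NodePort. A sketch of the same lookup done manually for the hello-node service above:

NODE_IP=$(out/minikube-darwin-arm64 -p functional-430000 ip)
NODE_PORT=$(kubectl --context functional-430000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://${NODE_IP}:${NODE_PORT}"   # matches the endpoint found above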

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "85.595ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.694167ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "83.860958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "32.634334ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2963105874/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722461879165022000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2963105874/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722461879165022000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2963105874/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722461879165022000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2963105874/001/test-1722461879165022000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.339958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 21:37 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 21:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 21:37 test-1722461879165022000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh cat /mount-9p/test-1722461879165022000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-430000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1bdda2ae-158d-46ff-a78a-cefba7da761e] Pending
helpers_test.go:344: "busybox-mount" [1bdda2ae-158d-46ff-a78a-cefba7da761e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0731 14:38:02.178644    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [1bdda2ae-158d-46ff-a78a-cefba7da761e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1bdda2ae-158d-46ff-a78a-cefba7da761e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004431125s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-430000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2963105874/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.02s)
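Note that the first findmnt probe fails and is simply retried: the mount daemon needs a moment before the 9p filesystem becomes visible in the guest. A polling sketch of that readiness check (the 10-attempt budget is arbitrary):

# retry until the 9p mount shows up in the guest
for _ in $(seq 1 10); do
  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p" && break
  sleep 1
done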

TestFunctional/parallel/MountCmd/specific-port (0.92s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port23762041/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.197875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port23762041/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh "sudo umount -f /mount-9p": exit status 1 (60.988834ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-430000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port23762041/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount1: exit status 1 (66.72225ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount2: exit status 1 (56.606ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-430000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-430000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2045101671/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
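mount --kill=true terminates every mount daemon belonging to the profile at once, which is why the subsequent per-mount stop calls find no parent process. A sketch of verifying the cleanup by hand:

out/minikube-darwin-arm64 mount -p functional-430000 --kill=true
# all three mount points should now be gone; findmnt exits non-zero for each
out/minikube-darwin-arm64 -p functional-430000 ssh "findmnt -T /mount1" || echo "/mount1 unmounted"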

TestFunctional/parallel/Version/short (0.03s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 version -o=json --components
2024/07/31 14:38:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-430000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-430000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-430000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-430000 image ls --format short --alsologtostderr:
I0731 14:38:16.059244    3070 out.go:291] Setting OutFile to fd 1 ...
I0731 14:38:16.059385    3070 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.059388    3070 out.go:304] Setting ErrFile to fd 2...
I0731 14:38:16.059391    3070 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.059531    3070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 14:38:16.059918    3070 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.059981    3070 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.060764    3070 ssh_runner.go:195] Run: systemctl --version
I0731 14:38:16.060775    3070 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/functional-430000/id_rsa Username:docker}
I0731 14:38:16.088188    3070 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-430000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/kicbase/echo-server               | functional-430000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-430000 | 04d3bbee3f384 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-430000 image ls --format table --alsologtostderr:
I0731 14:38:16.272929    3081 out.go:291] Setting OutFile to fd 1 ...
I0731 14:38:16.273062    3081 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.273066    3081 out.go:304] Setting ErrFile to fd 2...
I0731 14:38:16.273068    3081 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.273206    3081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 14:38:16.273611    3081 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.273678    3081 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.274495    3081 ssh_runner.go:195] Run: systemctl --version
I0731 14:38:16.274504    3081 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/functional-430000/id_rsa Username:docker}
I0731 14:38:16.304218    3081 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-430000 image ls --format json --alsologtostderr:
[{"id":"04d3bbee3f3847d52b95930a08a9596e2e20bf9ed0b550ea91f00ec977e0f0ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-430000"],"size":"30"},
{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},
{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},
{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},
{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},
{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},
{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},
{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-430000"],"size":"4780000"},
{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-430000 image ls --format json --alsologtostderr:
I0731 14:38:16.200648    3077 out.go:291] Setting OutFile to fd 1 ...
I0731 14:38:16.200839    3077 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.200843    3077 out.go:304] Setting ErrFile to fd 2...
I0731 14:38:16.200845    3077 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.200984    3077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 14:38:16.201471    3077 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.201534    3077 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.202298    3077 ssh_runner.go:195] Run: systemctl --version
I0731 14:38:16.202306    3077 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/functional-430000/id_rsa Username:docker}
I0731 14:38:16.228455    3077 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-430000 image ls --format yaml --alsologtostderr:
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 04d3bbee3f3847d52b95930a08a9596e2e20bf9ed0b550ea91f00ec977e0f0ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-430000
size: "30"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-430000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-430000 image ls --format yaml --alsologtostderr:
I0731 14:38:16.129912    3073 out.go:291] Setting OutFile to fd 1 ...
I0731 14:38:16.130067    3073 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.130071    3073 out.go:304] Setting ErrFile to fd 2...
I0731 14:38:16.130073    3073 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.130206    3073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 14:38:16.130625    3073 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.130685    3073 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.131502    3073 ssh_runner.go:195] Run: systemctl --version
I0731 14:38:16.131511    3073 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/functional-430000/id_rsa Username:docker}
I0731 14:38:16.157374    3073 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-430000 ssh pgrep buildkitd: exit status 1 (59.87825ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr: (1.564722666s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-430000 image build -t localhost/my-image:functional-430000 testdata/build --alsologtostderr:
I0731 14:38:16.232799    3079 out.go:291] Setting OutFile to fd 1 ...
I0731 14:38:16.233058    3079 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.233062    3079 out.go:304] Setting ErrFile to fd 2...
I0731 14:38:16.233065    3079 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:38:16.233196    3079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1411/.minikube/bin
I0731 14:38:16.233660    3079 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.234455    3079 config.go:182] Loaded profile config "functional-430000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:38:16.235370    3079 ssh_runner.go:195] Run: systemctl --version
I0731 14:38:16.235379    3079 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1411/.minikube/machines/functional-430000/id_rsa Username:docker}
I0731 14:38:16.262579    3079 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2141897846.tar
I0731 14:38:16.262660    3079 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 14:38:16.267988    3079 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2141897846.tar
I0731 14:38:16.269897    3079 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2141897846.tar: stat -c "%s %y" /var/lib/minikube/build/build.2141897846.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2141897846.tar': No such file or directory
I0731 14:38:16.269913    3079 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2141897846.tar --> /var/lib/minikube/build/build.2141897846.tar (3072 bytes)
I0731 14:38:16.279237    3079 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2141897846
I0731 14:38:16.282777    3079 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2141897846 -xf /var/lib/minikube/build/build.2141897846.tar
I0731 14:38:16.286418    3079 docker.go:360] Building image: /var/lib/minikube/build/build.2141897846
I0731 14:38:16.286473    3079 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-430000 /var/lib/minikube/build/build.2141897846
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:fcc5d7275c2d5153467c3ea210a615aa9675c5373a38d41946168b319ee9587c done
#8 naming to localhost/my-image:functional-430000 done
#8 DONE 0.0s
I0731 14:38:17.703934    3079 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-430000 /var/lib/minikube/build/build.2141897846: (1.417195667s)
I0731 14:38:17.703996    3079 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2141897846
I0731 14:38:17.707826    3079 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2141897846.tar
I0731 14:38:17.711701    3079 build_images.go:217] Built localhost/my-image:functional-430000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2141897846.tar
I0731 14:38:17.711716    3079 build_images.go:133] succeeded building to: functional-430000
I0731 14:38:17.711718    3079 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.69s)
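The BuildKit steps above imply a three-instruction build: a busybox base, a no-op RUN, and a single ADD of content.txt. A plausible reconstruction of the testdata/build context, sketched under that assumption (the file contents are guesses):

mkdir -p ./mybuild && echo hello > ./mybuild/content.txt
cat > ./mybuild/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-darwin-arm64 -p functional-430000 image build -t localhost/my-image:functional-430000 ./mybuild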

TestFunctional/parallel/ImageCommands/Setup (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.649249625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-430000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-430000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image load --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image save kicbase/echo-server:functional-430000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image rm kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-430000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 image save --daemon kicbase/echo-server:functional-430000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-430000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)
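Taken together, the save/remove/load tests above exercise a full image round trip between the cluster runtime and the host docker daemon. Condensed into one sequence (commands as run in the log):

out/minikube-darwin-arm64 -p functional-430000 image save kicbase/echo-server:functional-430000 ./echo-server-save.tar
docker rmi kicbase/echo-server:functional-430000                # drop the host copy
out/minikube-darwin-arm64 -p functional-430000 image save --daemon kicbase/echo-server:functional-430000
docker image inspect kicbase/echo-server:functional-430000     # back in the host daemon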

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-430000 docker-env) && out/minikube-darwin-arm64 status -p functional-430000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-430000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)
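The DockerEnv test depends on minikube printing shell exports (DOCKER_HOST and friends) that repoint the local Docker CLI at the daemon inside the VM. A minimal bash sketch, assuming the functional-430000 profile is running:

  # apply the exports, then talk to the in-VM daemon with the plain docker CLI
  eval $(out/minikube-darwin-arm64 -p functional-430000 docker-env)
  docker images   # now lists images inside the minikube VM

  # the --unset form prints the matching unset commands to switch back
  eval $(out/minikube-darwin-arm64 -p functional-430000 docker-env --unset)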

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-430000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
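All three UpdateContextCmd variants drive the same subcommand, which rewrites the profile's kubeconfig entry so it matches the cluster's current IP and port. A minimal usage sketch (the kubectl check assumes the context is currently selected):

  # sync the kubeconfig entry for a profile after its endpoint changes
  minikube -p functional-430000 update-context --alsologtostderr -v=2
  kubectl config current-context   # expected: functional-430000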

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-430000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-430000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-430000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (187.92s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-875000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0731 14:40:18.313514    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
E0731 14:40:46.025590    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-875000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m7.732101042s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (187.92s)

TestMultiControlPlane/serial/DeployApp (4.4s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-875000 -- rollout status deployment/busybox: (2.895006s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-74xql -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-t5z9z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-w6svk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-74xql -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-t5z9z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-w6svk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-74xql -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-t5z9z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-w6svk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.40s)

TestMultiControlPlane/serial/PingHostFromPods (0.75s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-74xql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-74xql -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-t5z9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-t5z9z -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-w6svk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-875000 -- exec busybox-fc5497c4f-w6svk -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)
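The pipeline above resolves host.minikube.internal inside each pod and then pings whatever address it finds; awk 'NR==5' literally selects the fifth line of nslookup's output, which is where busybox's resolver prints the answer, and cut extracts the address field. Run by hand inside any of the busybox pods, the equivalent is:

  # resolve the host gateway name, then ping the resolved address
  nslookup host.minikube.internal
  ping -c 1 192.168.105.1   # the address this run resolved (QEMU host-side gateway)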

TestMultiControlPlane/serial/AddWorkerNode (53.21s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-875000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-875000 -v=7 --alsologtostderr: (52.983446625s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.21s)

TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-875000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.14s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp testdata/cp-test.txt ha-875000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile259060898/001/cp-test_ha-875000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000:/home/docker/cp-test.txt ha-875000-m02:/home/docker/cp-test_ha-875000_ha-875000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test_ha-875000_ha-875000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000:/home/docker/cp-test.txt ha-875000-m03:/home/docker/cp-test_ha-875000_ha-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test_ha-875000_ha-875000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000:/home/docker/cp-test.txt ha-875000-m04:/home/docker/cp-test_ha-875000_ha-875000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test_ha-875000_ha-875000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp testdata/cp-test.txt ha-875000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test.txt"
E0731 14:42:25.979783    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:42:25.985423    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:42:25.995489    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile259060898/001/cp-test_ha-875000-m02.txt
E0731 14:42:26.017968    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
E0731 14:42:26.060424    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m02:/home/docker/cp-test.txt ha-875000:/home/docker/cp-test_ha-875000-m02_ha-875000.txt
E0731 14:42:26.140687    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test_ha-875000-m02_ha-875000.txt"
E0731 14:42:26.302828    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m02:/home/docker/cp-test.txt ha-875000-m03:/home/docker/cp-test_ha-875000-m02_ha-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test_ha-875000-m02_ha-875000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m02:/home/docker/cp-test.txt ha-875000-m04:/home/docker/cp-test_ha-875000-m02_ha-875000-m04.txt
E0731 14:42:26.624992    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test_ha-875000-m02_ha-875000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp testdata/cp-test.txt ha-875000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile259060898/001/cp-test_ha-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m03:/home/docker/cp-test.txt ha-875000:/home/docker/cp-test_ha-875000-m03_ha-875000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test.txt"
E0731 14:42:27.266834    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test_ha-875000-m03_ha-875000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m03:/home/docker/cp-test.txt ha-875000-m02:/home/docker/cp-test_ha-875000-m03_ha-875000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test_ha-875000-m03_ha-875000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m03:/home/docker/cp-test.txt ha-875000-m04:/home/docker/cp-test_ha-875000-m03_ha-875000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test_ha-875000-m03_ha-875000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp testdata/cp-test.txt ha-875000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile259060898/001/cp-test_ha-875000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m04:/home/docker/cp-test.txt ha-875000:/home/docker/cp-test_ha-875000-m04_ha-875000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000 "sudo cat /home/docker/cp-test_ha-875000-m04_ha-875000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m04:/home/docker/cp-test.txt ha-875000-m02:/home/docker/cp-test_ha-875000-m04_ha-875000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test_ha-875000-m04_ha-875000-m02.txt"
E0731 14:42:28.549067    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/functional-430000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 cp ha-875000-m04:/home/docker/cp-test.txt ha-875000-m03:/home/docker/cp-test_ha-875000-m04_ha-875000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-875000 ssh -n ha-875000-m03 "sudo cat /home/docker/cp-test_ha-875000-m04_ha-875000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.14s)
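The CopyFile matrix drives minikube cp in all three directions, host to node, node to host, and node to node, and verifies each copy over SSH. A condensed sketch of the three forms using this run's ha-875000 profile (the local destination path is illustrative):

  minikube -p ha-875000 cp testdata/cp-test.txt ha-875000:/home/docker/cp-test.txt                    # host -> node
  minikube -p ha-875000 cp ha-875000:/home/docker/cp-test.txt ./cp-test_ha-875000.txt                 # node -> host
  minikube -p ha-875000 cp ha-875000:/home/docker/cp-test.txt ha-875000-m02:/home/docker/cp-test.txt  # node -> node
  minikube -p ha-875000 ssh -n ha-875000-m02 "sudo cat /home/docker/cp-test.txt"                      # verify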

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0731 14:50:18.302656    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.959448375s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.02s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-338000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-338000 --output=json --user=testUser: (2.017818167s)
--- PASS: TestJSONOutput/stop/Command (2.02s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-015000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-015000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.953833ms)
-- stdout --
	{"specversion":"1.0","id":"67aa8446-ea29-45bc-8fd4-e7fece1190cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-015000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f73679c4-c696-4a70-abfa-51b5ea758d54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"a3bae346-47da-43ac-9fda-e408342fbb05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig"}}
	{"specversion":"1.0","id":"0029cbb1-0fba-4e1a-9732-e0faf20a45d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"17b9d356-d910-4e0c-ab15-c8eb025225d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f358771-5bbc-492d-af54-c3ee9e8f585b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube"}}
	{"specversion":"1.0","id":"705f013e-07c5-4952-b4f0-fd5e7706acf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d84ff159-6a1b-4a4e-b682-81f0853a32b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-015000
--- PASS: TestErrorJSONOutput (0.20s)
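With --output=json, each line minikube prints is a CloudEvents-style JSON object whose type field distinguishes steps, info, and errors; the test asserts that an unsupported driver surfaces as an io.k8s.sigs.minikube.error event carrying exitcode 56 (DRV_UNSUPPORTED_OS). A hedged sketch of consuming that stream with jq, outside the test suite (profile name hypothetical):

  # print only the error events from a JSON-mode start
  minikube start -p demo --memory=2200 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'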

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.96s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-256000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (107.967208ms)
-- stdout --
	* [NoKubernetes-256000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
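This test pins down that --no-kubernetes and --kubernetes-version are mutually exclusive: minikube exits with status 14 (MK_USAGE) instead of starting. Following the advice in the captured stderr, a sketch of the recovery path:

  # clear any pinned version, then start a VM without Kubernetes
  minikube config unset kubernetes-version
  minikube start -p NoKubernetes-256000 --no-kubernetes --driver=qemu2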

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-256000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-256000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.097041ms)
-- stdout --
	* The control-plane node NoKubernetes-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-256000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.44s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.697570542s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E0731 15:15:18.284040    1913 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1411/.minikube/profiles/addons-941000/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.746218708s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.44s)

TestNoKubernetes/serial/Stop (3.42s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-256000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-256000: (3.420799541s)
--- PASS: TestNoKubernetes/serial/Stop (3.42s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-256000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-256000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.677166ms)
-- stdout --
	* The control-plane node NoKubernetes-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-256000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-609000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (2.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-233000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-233000 --alsologtostderr -v=3: (2.86098925s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-233000 -n old-k8s-version-233000: exit status 7 (54.2705ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-233000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
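The EnableAddonAfterStop tests confirm that addon configuration is accepted while the profile is stopped (status exits 7, which the harness tolerates as "may be ok") and is applied on the next start. The --images flag overrides an addon component's default image, one Component=image pair at a time:

  # queue the dashboard addon with an overridden MetricsScraper image on a stopped profile
  minikube addons enable dashboard -p old-k8s-version-233000 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4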

TestStartStop/group/no-preload/serial/Stop (3.87s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-428000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-428000 --alsologtostderr -v=3: (3.874432667s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-428000 -n no-preload-428000: exit status 7 (41.357291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-428000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/embed-certs/serial/Stop (3.25s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-511000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-511000 --alsologtostderr -v=3: (3.245958958s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-416000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-416000 --alsologtostderr -v=3: (3.447984208s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.45s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-511000 -n embed-certs-511000: exit status 7 (54.829416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-511000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-416000 -n default-k8s-diff-port-416000: exit status 7 (57.305084ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-416000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-529000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.27s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-529000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-529000 --alsologtostderr -v=3: (3.268349s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-529000 -n newest-cni-529000: exit status 7 (53.828167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-529000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
TestNetworkPlugins/group/cilium (2.28s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-531000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-531000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-531000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/hosts:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/resolv.conf:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-531000
>>> host: crictl pods:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: crictl containers:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> k8s: describe netcat deployment:
error: context "cilium-531000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-531000" does not exist
>>> k8s: netcat logs:
error: context "cilium-531000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-531000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-531000" does not exist
>>> k8s: coredns logs:
error: context "cilium-531000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-531000" does not exist
>>> k8s: api server logs:
error: context "cilium-531000" does not exist
>>> host: /etc/cni:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: ip a s:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: ip r s:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: iptables-save:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: iptables table nat:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-531000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-531000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-531000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-531000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-531000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-531000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-531000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-531000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-531000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-531000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-531000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: kubelet daemon config:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> k8s: kubelet logs:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-531000
>>> host: docker daemon status:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: docker daemon config:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: docker system info:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: cri-docker daemon status:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: cri-docker daemon config:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: cri-dockerd version:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: containerd daemon status:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: containerd daemon config:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: containerd config dump:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: crio daemon status:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: crio daemon config:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: /etc/crio:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
>>> host: crio config:
* Profile "cilium-531000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-531000"
----------------------- debugLogs end: cilium-531000 [took: 2.1748535s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-531000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)
TestStartStop/group/disable-driver-mounts (0.1s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-540000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-540000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)